00:00:00.000 Started by upstream project "autotest-per-patch" build number 126161
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "jbp-per-patch" build number 23865
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.024 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.025 The recommended git tool is: git
00:00:00.025 using credential 00000000-0000-0000-0000-000000000002
00:00:00.027 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.041 Fetching changes from the remote Git repository
00:00:00.044 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.064 Using shallow fetch with depth 1
00:00:00.064 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.064 > git --version # timeout=10
00:00:00.076 > git --version # 'git version 2.39.2'
00:00:00.076 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.090 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.090 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/41/22241/22 # timeout=5
00:00:04.451 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.466 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.482 Checking out Revision 055051402f6bd793109ccc450ac2f885bb0fdaeb (FETCH_HEAD)
00:00:04.482 > git config core.sparsecheckout # timeout=10
00:00:04.495 > git read-tree -mu HEAD # timeout=10
00:00:04.512 > git checkout -f 055051402f6bd793109ccc450ac2f885bb0fdaeb # timeout=5
00:00:04.533 Commit message: "jenkins/jjb-config: Add release-build jobs to per-patch"
00:00:04.533 > git rev-list --no-walk 8c6732c9e0fe7c9c74cd1fb560a619e554726af3 # timeout=10
00:00:04.645 [Pipeline] Start of Pipeline
00:00:04.659 [Pipeline] library
00:00:04.661 Loading library shm_lib@master
00:00:04.661 Library shm_lib@master is cached. Copying from home.
00:00:04.682 [Pipeline] node
00:00:04.693 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.695 [Pipeline] {
00:00:04.709 [Pipeline] catchError
00:00:04.711 [Pipeline] {
00:00:04.723 [Pipeline] wrap
00:00:04.730 [Pipeline] {
00:00:04.735 [Pipeline] stage
00:00:04.736 [Pipeline] { (Prologue)
00:00:04.907 [Pipeline] sh
00:00:05.192 + logger -p user.info -t JENKINS-CI
00:00:05.212 [Pipeline] echo
00:00:05.213 Node: GP11
00:00:05.222 [Pipeline] sh
00:00:05.522 [Pipeline] setCustomBuildProperty
00:00:05.532 [Pipeline] echo
00:00:05.534 Cleanup processes
00:00:05.538 [Pipeline] sh
00:00:05.825 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.825 2097542 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.838 [Pipeline] sh
00:00:06.120 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.120 ++ grep -v 'sudo pgrep'
00:00:06.120 ++ awk '{print $1}'
00:00:06.120 + sudo kill -9
00:00:06.120 + true
00:00:06.135 [Pipeline] cleanWs
00:00:06.144 [WS-CLEANUP] Deleting project workspace...
00:00:06.144 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.149 [WS-CLEANUP] done
00:00:06.153 [Pipeline] setCustomBuildProperty
00:00:06.167 [Pipeline] sh
00:00:06.444 + sudo git config --global --replace-all safe.directory '*'
00:00:06.543 [Pipeline] httpRequest
00:00:06.581 [Pipeline] echo
00:00:06.582 Sorcerer 10.211.164.101 is alive
00:00:06.591 [Pipeline] httpRequest
00:00:06.595 HttpMethod: GET
00:00:06.596 URL: http://10.211.164.101/packages/jbp_055051402f6bd793109ccc450ac2f885bb0fdaeb.tar.gz
00:00:06.596 Sending request to url: http://10.211.164.101/packages/jbp_055051402f6bd793109ccc450ac2f885bb0fdaeb.tar.gz
00:00:06.609 Response Code: HTTP/1.1 200 OK
00:00:06.609 Success: Status code 200 is in the accepted range: 200,404
00:00:06.610 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_055051402f6bd793109ccc450ac2f885bb0fdaeb.tar.gz
00:00:09.670 [Pipeline] sh
00:00:09.955 + tar --no-same-owner -xf jbp_055051402f6bd793109ccc450ac2f885bb0fdaeb.tar.gz
00:00:09.976 [Pipeline] httpRequest
00:00:10.002 [Pipeline] echo
00:00:10.004 Sorcerer 10.211.164.101 is alive
00:00:10.012 [Pipeline] httpRequest
00:00:10.017 HttpMethod: GET
00:00:10.017 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:10.018 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:10.037 Response Code: HTTP/1.1 200 OK
00:00:10.038 Success: Status code 200 is in the accepted range: 200,404
00:00:10.038 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:55.394 [Pipeline] sh
00:00:55.677 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:58.240 [Pipeline] sh
00:00:58.520 + git -C spdk log --oneline -n5
00:00:58.520 719d03c6a sock/uring: only register net impl if supported
00:00:58.520 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:00:58.520 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:00:58.520 6c7c1f57e accel: add sequence outstanding stat
00:00:58.520 3bc8e6a26 accel: add utility to put task
00:00:58.533 [Pipeline] }
00:00:58.550 [Pipeline] // stage
00:00:58.559 [Pipeline] stage
00:00:58.561 [Pipeline] { (Prepare)
00:00:58.579 [Pipeline] writeFile
00:00:58.596 [Pipeline] sh
00:00:58.879 + logger -p user.info -t JENKINS-CI
00:00:58.892 [Pipeline] sh
00:00:59.176 + logger -p user.info -t JENKINS-CI
00:00:59.189 [Pipeline] sh
00:00:59.473 + cat autorun-spdk.conf
00:00:59.473 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:59.473 SPDK_TEST_NVMF=1
00:00:59.473 SPDK_TEST_NVME_CLI=1
00:00:59.473 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:59.473 SPDK_TEST_NVMF_NICS=e810
00:00:59.473 SPDK_TEST_VFIOUSER=1
00:00:59.473 SPDK_RUN_UBSAN=1
00:00:59.473 NET_TYPE=phy
00:00:59.483 RUN_NIGHTLY=0
00:00:59.488 [Pipeline] readFile
00:00:59.514 [Pipeline] withEnv
00:00:59.516 [Pipeline] {
00:00:59.531 [Pipeline] sh
00:00:59.821 + set -ex
00:00:59.821 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:59.821 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:59.821 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:59.821 ++ SPDK_TEST_NVMF=1
00:00:59.821 ++ SPDK_TEST_NVME_CLI=1
00:00:59.821 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:59.821 ++ SPDK_TEST_NVMF_NICS=e810
00:00:59.821 ++ SPDK_TEST_VFIOUSER=1
00:00:59.821 ++ SPDK_RUN_UBSAN=1
00:00:59.821 ++ NET_TYPE=phy
00:00:59.821 ++ RUN_NIGHTLY=0
00:00:59.821 + case $SPDK_TEST_NVMF_NICS in
00:00:59.821 + DRIVERS=ice
00:00:59.821 + [[ tcp == \r\d\m\a ]]
00:00:59.821 + [[ -n ice ]]
00:00:59.821 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:59.821 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:59.821 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:59.821 rmmod: ERROR: Module irdma is not currently loaded
00:00:59.821 rmmod: ERROR: Module i40iw is not currently loaded
00:00:59.821 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:59.821 + true
00:00:59.821 + for D in $DRIVERS
00:00:59.821 + sudo modprobe ice
00:00:59.821 + exit 0
00:00:59.831 [Pipeline] }
00:00:59.848 [Pipeline] // withEnv
00:00:59.852 [Pipeline] }
00:00:59.862 [Pipeline] // stage
00:00:59.867 [Pipeline] catchError
00:00:59.868 [Pipeline] {
00:00:59.877 [Pipeline] timeout
00:00:59.877 Timeout set to expire in 50 min
00:00:59.878 [Pipeline] {
00:00:59.888 [Pipeline] stage
00:00:59.889 [Pipeline] { (Tests)
00:00:59.901 [Pipeline] sh
00:01:00.184 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:00.184 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:00.184 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:00.184 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:00.184 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:00.184 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:00.184 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:00.184 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:00.184 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:00.184 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:00.184 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:00.184 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:00.184 + source /etc/os-release
00:01:00.184 ++ NAME='Fedora Linux'
00:01:00.184 ++ VERSION='38 (Cloud Edition)'
00:01:00.184 ++ ID=fedora
00:01:00.184 ++ VERSION_ID=38
00:01:00.184 ++ VERSION_CODENAME=
00:01:00.184 ++ PLATFORM_ID=platform:f38
00:01:00.184 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:00.184 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:00.184 ++ LOGO=fedora-logo-icon
00:01:00.184 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:00.184 ++ HOME_URL=https://fedoraproject.org/
00:01:00.184 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:00.184 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:00.184 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:00.184 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:00.184 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:00.184 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:00.184 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:00.184 ++ SUPPORT_END=2024-05-14
00:01:00.184 ++ VARIANT='Cloud Edition'
00:01:00.184 ++ VARIANT_ID=cloud
00:01:00.184 + uname -a
00:01:00.184 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:00.184 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:01.121 Hugepages
00:01:01.121 node hugesize free / total
00:01:01.121 node0 1048576kB 0 / 0
00:01:01.121 node0 2048kB 0 / 0
00:01:01.380 node1 1048576kB 0 / 0
00:01:01.380 node1 2048kB 0 / 0
00:01:01.380
00:01:01.380 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:01.380 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:01.380 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:01.380 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:01.380 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:01.380 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:01.380 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:01.380 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:01.380 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:01.380 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:01.380 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:01.380 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:01.380 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:01.380 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:01.380 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:01.380 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:01.380 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:01.380 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:01.380 + rm -f /tmp/spdk-ld-path
00:01:01.380 + source autorun-spdk.conf
00:01:01.380 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.380 ++ SPDK_TEST_NVMF=1
00:01:01.380 ++ SPDK_TEST_NVME_CLI=1
00:01:01.380 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:01.380 ++ SPDK_TEST_NVMF_NICS=e810
00:01:01.380 ++ SPDK_TEST_VFIOUSER=1
00:01:01.380 ++ SPDK_RUN_UBSAN=1
00:01:01.380 ++ NET_TYPE=phy
00:01:01.380 ++ RUN_NIGHTLY=0
00:01:01.380 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:01.380 + [[ -n '' ]]
00:01:01.380 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:01.380 + for M in /var/spdk/build-*-manifest.txt
00:01:01.380 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:01.380 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:01.380 + for M in /var/spdk/build-*-manifest.txt
00:01:01.380 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:01.380 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:01.380 ++ uname
00:01:01.380 + [[ Linux == \L\i\n\u\x ]]
00:01:01.380 + sudo dmesg -T
00:01:01.380 + sudo dmesg --clear
00:01:01.380 + dmesg_pid=2098215
00:01:01.380 + [[ Fedora Linux == FreeBSD ]]
00:01:01.380 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:01.380 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:01.380 + sudo dmesg -Tw
00:01:01.380 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:01.380 + [[ -x /usr/src/fio-static/fio ]]
00:01:01.380 + export FIO_BIN=/usr/src/fio-static/fio
00:01:01.380 + FIO_BIN=/usr/src/fio-static/fio
00:01:01.380 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:01.380 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:01.380 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:01.380 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:01.380 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:01.380 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:01.380 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:01.380 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:01.380 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:01.380 Test configuration:
00:01:01.381 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.381 SPDK_TEST_NVMF=1
00:01:01.381 SPDK_TEST_NVME_CLI=1
00:01:01.381 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:01.381 SPDK_TEST_NVMF_NICS=e810
00:01:01.381 SPDK_TEST_VFIOUSER=1
00:01:01.381 SPDK_RUN_UBSAN=1
00:01:01.381 NET_TYPE=phy
00:01:01.381 RUN_NIGHTLY=0
10:13:55 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
10:13:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
10:13:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
10:13:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
10:13:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:13:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:13:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:13:55 -- paths/export.sh@5 -- $ export PATH
10:13:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:13:55 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
10:13:56 -- common/autobuild_common.sh@444 -- $ date +%s
10:13:56 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721031236.XXXXXX
10:13:56 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721031236.bzPJep
10:13:56 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
10:13:56 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
10:13:56 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
10:13:56 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
10:13:56 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
10:13:56 -- common/autobuild_common.sh@460 -- $ get_config_params
10:13:56 -- common/autotest_common.sh@396 -- $ xtrace_disable
10:13:56 -- common/autotest_common.sh@10 -- $ set +x
10:13:56 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
10:13:56 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
10:13:56 -- pm/common@17 -- $ local monitor
10:13:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
10:13:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
10:13:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
10:13:56 -- pm/common@21 -- $ date +%s
10:13:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
10:13:56 -- pm/common@21 -- $ date +%s
10:13:56 -- pm/common@25 -- $ sleep 1
10:13:56 -- pm/common@21 -- $ date +%s
10:13:56 -- pm/common@21 -- $ date +%s
10:13:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721031236
10:13:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721031236
10:13:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721031236
10:13:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721031236
00:01:01.642 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721031236_collect-vmstat.pm.log
00:01:01.642 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721031236_collect-cpu-load.pm.log
00:01:01.642 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721031236_collect-cpu-temp.pm.log
00:01:01.642 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721031236_collect-bmc-pm.bmc.pm.log
00:01:02.577 10:13:57 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:01:02.577 10:13:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
10:13:57 -- spdk/autobuild.sh@12 -- $ umask 022
10:13:57 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
10:13:57 -- spdk/autobuild.sh@16 -- $ date -u
00:01:02.577 Mon Jul 15 08:13:57 AM UTC 2024
10:13:57 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:02.577 v24.09-pre-202-g719d03c6a
10:13:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
10:13:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
10:13:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
10:13:57 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
10:13:57 -- common/autotest_common.sh@1105 -- $ xtrace_disable
10:13:57 -- common/autotest_common.sh@10 -- $ set +x
00:01:02.577 ************************************
00:01:02.577 START TEST ubsan
00:01:02.577 ************************************
10:13:57 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:02.577 using ubsan
00:01:02.577
00:01:02.577 real 0m0.000s
00:01:02.577 user 0m0.000s
00:01:02.577 sys 0m0.000s
10:13:57 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
10:13:57 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:02.577 ************************************
00:01:02.577 END TEST ubsan
00:01:02.577 ************************************
10:13:57 -- common/autotest_common.sh@1142 -- $ return 0
10:13:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
10:13:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
10:13:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
10:13:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
10:13:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
10:13:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
10:13:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
10:13:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
10:13:57 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:02.577 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:02.577 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:02.835 Using 'verbs' RDMA provider
00:01:13.417 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:23.426 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:23.426 Creating mk/config.mk...done.
00:01:23.426 Creating mk/cc.flags.mk...done.
00:01:23.426 Type 'make' to build.
00:01:23.426 10:14:17 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
10:14:17 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
10:14:17 -- common/autotest_common.sh@1105 -- $ xtrace_disable
10:14:17 -- common/autotest_common.sh@10 -- $ set +x
00:01:23.426 ************************************
00:01:23.426 START TEST make
00:01:23.426 ************************************
10:14:17 make -- common/autotest_common.sh@1123 -- $ make -j48
00:01:23.689 make[1]: Nothing to be done for 'all'.
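For reference, this stage is driven entirely by the autorun-spdk.conf echoed above. Below is a minimal sketch of reproducing it outside Jenkins, assuming a local SPDK checkout at ./spdk (the working directory and checkout path are illustrative; the conf keys are copied verbatim from the job output above):

# Sketch: write the same test configuration the job printed, then hand it to autorun.sh.
# The ./spdk location is an assumption; any SPDK checkout path works the same way.
cat > autorun-spdk.conf <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_NVMF=1
SPDK_TEST_NVME_CLI=1
SPDK_TEST_NVMF_TRANSPORT=tcp
SPDK_TEST_NVMF_NICS=e810
SPDK_TEST_VFIOUSER=1
SPDK_RUN_UBSAN=1
NET_TYPE=phy
RUN_NIGHTLY=0
EOF
./spdk/autorun.sh "$PWD/autorun-spdk.conf"   # same invocation pattern as the job above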
00:01:24.856 The Meson build system
00:01:24.856 Version: 1.3.1
00:01:24.856 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:24.856 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:24.856 Build type: native build
00:01:24.856 Project name: libvfio-user
00:01:24.856 Project version: 0.0.1
00:01:24.856 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:24.856 C linker for the host machine: cc ld.bfd 2.39-16
00:01:24.856 Host machine cpu family: x86_64
00:01:24.856 Host machine cpu: x86_64
00:01:24.856 Run-time dependency threads found: YES
00:01:24.856 Library dl found: YES
00:01:24.856 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:24.856 Run-time dependency json-c found: YES 0.17
00:01:24.856 Run-time dependency cmocka found: YES 1.1.7
00:01:24.856 Program pytest-3 found: NO
00:01:24.856 Program flake8 found: NO
00:01:24.856 Program misspell-fixer found: NO
00:01:24.856 Program restructuredtext-lint found: NO
00:01:24.856 Program valgrind found: YES (/usr/bin/valgrind)
00:01:24.856 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:24.856 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:24.856 Compiler for C supports arguments -Wwrite-strings: YES
00:01:24.856 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:24.856 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:24.856 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:24.856 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:24.856 Build targets in project: 8
00:01:24.856 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:24.856 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:24.856
00:01:24.856 libvfio-user 0.0.1
00:01:24.856
00:01:24.856 User defined options
00:01:24.856 buildtype : debug
00:01:24.856 default_library: shared
00:01:24.856 libdir : /usr/local/lib
00:01:24.856
00:01:24.856 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:25.810 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:25.810 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:25.810 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:25.810 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:25.810 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:25.810 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:25.810 [6/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:25.810 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:25.810 [8/37] Compiling C object samples/null.p/null.c.o
00:01:25.810 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:25.810 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:25.810 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:25.810 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:25.810 [13/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:25.810 [14/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:25.810 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:26.075 [16/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:26.075 [17/37] Compiling C object samples/client.p/client.c.o
00:01:26.075 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:26.075 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:26.075 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:26.075 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:26.075 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:26.075 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:26.075 [24/37] Compiling C object samples/server.p/server.c.o
00:01:26.075 [25/37] Linking target samples/client
00:01:26.075 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:26.075 [27/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:26.075 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:26.336 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:01:26.336 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:26.336 [31/37] Linking target test/unit_tests
00:01:26.336 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:26.336 [33/37] Linking target samples/null
00:01:26.336 [34/37] Linking target samples/server
00:01:26.600 [35/37] Linking target samples/shadow_ioeventfd_server
00:01:26.600 [36/37] Linking target samples/gpio-pci-idio-16
00:01:26.600 [37/37] Linking target samples/lspci
00:01:26.600 INFO: autodetecting backend as ninja
00:01:26.600 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
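The libvfio-user submodule above is a plain Meson project, so the build and staged install the job performs can be sketched manually as follows. The paths match this workspace and the setup options mirror the "User defined options" block above; this is a sketch, not the exact command sequence SPDK's build scripts issue:

# Sketch: manual Meson configure/build/install of SPDK's bundled libvfio-user.
SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
meson setup "$BUILD" "$SRC" -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
ninja -C "$BUILD"
# Stage the install under the SPDK build tree, as the DESTDIR line below shows.
DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C "$BUILD"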
00:01:26.600 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:27.170 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:27.170 ninja: no work to do.
00:01:32.470 The Meson build system
00:01:32.470 Version: 1.3.1
00:01:32.470 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:32.470 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:32.470 Build type: native build
00:01:32.470 Program cat found: YES (/usr/bin/cat)
00:01:32.470 Project name: DPDK
00:01:32.470 Project version: 24.03.0
00:01:32.470 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:32.470 C linker for the host machine: cc ld.bfd 2.39-16
00:01:32.470 Host machine cpu family: x86_64
00:01:32.470 Host machine cpu: x86_64
00:01:32.470 Message: ## Building in Developer Mode ##
00:01:32.470 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:32.470 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:32.470 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:32.470 Program python3 found: YES (/usr/bin/python3)
00:01:32.470 Program cat found: YES (/usr/bin/cat)
00:01:32.470 Compiler for C supports arguments -march=native: YES
00:01:32.470 Checking for size of "void *" : 8
00:01:32.470 Checking for size of "void *" : 8 (cached)
00:01:32.470 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:32.470 Library m found: YES
00:01:32.470 Library numa found: YES
00:01:32.470 Has header "numaif.h" : YES
00:01:32.470 Library fdt found: NO
00:01:32.470 Library execinfo found: NO
00:01:32.470 Has header "execinfo.h" : YES
00:01:32.470 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:32.470 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:32.470 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:32.470 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:32.470 Run-time dependency openssl found: YES 3.0.9
00:01:32.470 Run-time dependency libpcap found: YES 1.10.4
00:01:32.470 Has header "pcap.h" with dependency libpcap: YES
00:01:32.470 Compiler for C supports arguments -Wcast-qual: YES
00:01:32.471 Compiler for C supports arguments -Wdeprecated: YES
00:01:32.471 Compiler for C supports arguments -Wformat: YES
00:01:32.471 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:32.471 Compiler for C supports arguments -Wformat-security: NO
00:01:32.471 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:32.471 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:32.471 Compiler for C supports arguments -Wnested-externs: YES
00:01:32.471 Compiler for C supports arguments -Wold-style-definition: YES
00:01:32.471 Compiler for C supports arguments -Wpointer-arith: YES
00:01:32.471 Compiler for C supports arguments -Wsign-compare: YES
00:01:32.471 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:32.471 Compiler for C supports arguments -Wundef: YES
00:01:32.471 Compiler for C supports arguments -Wwrite-strings: YES
00:01:32.471 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:32.471 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:32.471 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:32.471 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:32.471 Program objdump found: YES (/usr/bin/objdump)
00:01:32.471 Compiler for C supports arguments -mavx512f: YES
00:01:32.471 Checking if "AVX512 checking" compiles: YES
00:01:32.471 Fetching value of define "__SSE4_2__" : 1
00:01:32.471 Fetching value of define "__AES__" : 1
00:01:32.471 Fetching value of define "__AVX__" : 1
00:01:32.471 Fetching value of define "__AVX2__" : (undefined)
00:01:32.471 Fetching value of define "__AVX512BW__" : (undefined)
00:01:32.471 Fetching value of define "__AVX512CD__" : (undefined)
00:01:32.471 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:32.471 Fetching value of define "__AVX512F__" : (undefined)
00:01:32.471 Fetching value of define "__AVX512VL__" : (undefined)
00:01:32.471 Fetching value of define "__PCLMUL__" : 1
00:01:32.471 Fetching value of define "__RDRND__" : 1
00:01:32.471 Fetching value of define "__RDSEED__" : (undefined)
00:01:32.471 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:32.471 Fetching value of define "__znver1__" : (undefined)
00:01:32.471 Fetching value of define "__znver2__" : (undefined)
00:01:32.471 Fetching value of define "__znver3__" : (undefined)
00:01:32.471 Fetching value of define "__znver4__" : (undefined)
00:01:32.471 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:32.471 Message: lib/log: Defining dependency "log"
00:01:32.471 Message: lib/kvargs: Defining dependency "kvargs"
00:01:32.471 Message: lib/telemetry: Defining dependency "telemetry"
00:01:32.471 Checking for function "getentropy" : NO
00:01:32.471 Message: lib/eal: Defining dependency "eal"
00:01:32.471 Message: lib/ring: Defining dependency "ring"
00:01:32.471 Message: lib/rcu: Defining dependency "rcu"
00:01:32.471 Message: lib/mempool: Defining dependency "mempool"
00:01:32.471 Message: lib/mbuf: Defining dependency "mbuf"
00:01:32.471 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:32.471 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:32.471 Compiler for C supports arguments -mpclmul: YES
00:01:32.471 Compiler for C supports arguments -maes: YES
00:01:32.471 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:32.471 Compiler for C supports arguments -mavx512bw: YES
00:01:32.471 Compiler for C supports arguments -mavx512dq: YES
00:01:32.471 Compiler for C supports arguments -mavx512vl: YES
00:01:32.471 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:32.471 Compiler for C supports arguments -mavx2: YES
00:01:32.471 Compiler for C supports arguments -mavx: YES
00:01:32.471 Message: lib/net: Defining dependency "net"
00:01:32.471 Message: lib/meter: Defining dependency "meter"
00:01:32.471 Message: lib/ethdev: Defining dependency "ethdev"
00:01:32.471 Message: lib/pci: Defining dependency "pci"
00:01:32.471 Message: lib/cmdline: Defining dependency "cmdline"
00:01:32.471 Message: lib/hash: Defining dependency "hash"
00:01:32.471 Message: lib/timer: Defining dependency "timer"
00:01:32.471 Message: lib/compressdev: Defining dependency "compressdev"
00:01:32.471 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:32.471 Message: lib/dmadev: Defining dependency "dmadev"
00:01:32.471 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:32.471 Message: lib/power: Defining dependency "power"
00:01:32.471 Message: lib/reorder: Defining dependency "reorder"
00:01:32.471 Message: lib/security: Defining dependency "security"
00:01:32.471 Has header "linux/userfaultfd.h" : YES
00:01:32.471 Has header "linux/vduse.h" : YES
00:01:32.471 Message: lib/vhost: Defining dependency "vhost"
00:01:32.471 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:32.471 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:32.471 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:32.471 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:32.471 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:32.471 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:32.471 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:32.471 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:32.471 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:32.471 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:32.471 Program doxygen found: YES (/usr/bin/doxygen)
00:01:32.471 Configuring doxy-api-html.conf using configuration
00:01:32.471 Configuring doxy-api-man.conf using configuration
00:01:32.471 Program mandb found: YES (/usr/bin/mandb)
00:01:32.471 Program sphinx-build found: NO
00:01:32.471 Configuring rte_build_config.h using configuration
00:01:32.471 Message:
00:01:32.471 =================
00:01:32.471 Applications Enabled
00:01:32.471 =================
00:01:32.471
00:01:32.471 apps:
00:01:32.471
00:01:32.471
00:01:32.471 Message:
00:01:32.471 =================
00:01:32.471 Libraries Enabled
00:01:32.471 =================
00:01:32.471
00:01:32.471 libs:
00:01:32.471 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:32.471 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:32.471 cryptodev, dmadev, power, reorder, security, vhost,
00:01:32.471
00:01:32.471 Message:
00:01:32.471 ===============
00:01:32.471 Drivers Enabled
00:01:32.471 ===============
00:01:32.471
00:01:32.471 common:
00:01:32.471
00:01:32.471 bus:
00:01:32.471 pci, vdev,
00:01:32.471 mempool:
00:01:32.471 ring,
00:01:32.471 dma:
00:01:32.471
00:01:32.471 net:
00:01:32.471
00:01:32.471 crypto:
00:01:32.471
00:01:32.471 compress:
00:01:32.471
00:01:32.471 vdpa:
00:01:32.471
00:01:32.471
00:01:32.471 Message:
00:01:32.471 =================
00:01:32.471 Content Skipped
00:01:32.471 =================
00:01:32.471
00:01:32.471 apps:
00:01:32.471 dumpcap: explicitly disabled via build config
00:01:32.471 graph: explicitly disabled via build config
00:01:32.471 pdump: explicitly disabled via build config
00:01:32.471 proc-info: explicitly disabled via build config
00:01:32.471 test-acl: explicitly disabled via build config
00:01:32.471 test-bbdev: explicitly disabled via build config
00:01:32.471 test-cmdline: explicitly disabled via build config
00:01:32.471 test-compress-perf: explicitly disabled via build config
00:01:32.471 test-crypto-perf: explicitly disabled via build config
00:01:32.471 test-dma-perf: explicitly disabled via build config
00:01:32.471 test-eventdev: explicitly disabled via build config
00:01:32.471 test-fib: explicitly disabled via build config
00:01:32.471 test-flow-perf: explicitly disabled via build config
00:01:32.471 test-gpudev: explicitly disabled via build config
00:01:32.471 test-mldev: explicitly disabled via build config
00:01:32.471 test-pipeline: explicitly disabled via build config
00:01:32.471 test-pmd: explicitly disabled via build config
00:01:32.471 test-regex: explicitly disabled via build config
00:01:32.471 test-sad: explicitly disabled via build config
00:01:32.471 test-security-perf: explicitly disabled via build config
00:01:32.471
00:01:32.471 libs:
00:01:32.471 argparse: explicitly disabled via build config
00:01:32.471 metrics: explicitly disabled via build config
00:01:32.471 acl: explicitly disabled via build config
00:01:32.471 bbdev: explicitly disabled via build config
00:01:32.471 bitratestats: explicitly disabled via build config
00:01:32.471 bpf: explicitly disabled via build config
00:01:32.471 cfgfile: explicitly disabled via build config
00:01:32.471 distributor: explicitly disabled via build config
00:01:32.471 efd: explicitly disabled via build config
00:01:32.471 eventdev: explicitly disabled via build config
00:01:32.471 dispatcher: explicitly disabled via build config
00:01:32.471 gpudev: explicitly disabled via build config
00:01:32.471 gro: explicitly disabled via build config
00:01:32.471 gso: explicitly disabled via build config
00:01:32.471 ip_frag: explicitly disabled via build config
00:01:32.471 jobstats: explicitly disabled via build config
00:01:32.471 latencystats: explicitly disabled via build config
00:01:32.471 lpm: explicitly disabled via build config
00:01:32.471 member: explicitly disabled via build config
00:01:32.471 pcapng: explicitly disabled via build config
00:01:32.471 rawdev: explicitly disabled via build config
00:01:32.471 regexdev: explicitly disabled via build config
00:01:32.471 mldev: explicitly disabled via build config
00:01:32.471 rib: explicitly disabled via build config
00:01:32.471 sched: explicitly disabled via build config
00:01:32.471 stack: explicitly disabled via build config
00:01:32.471 ipsec: explicitly disabled via build config
00:01:32.471 pdcp: explicitly disabled via build config
00:01:32.471 fib: explicitly disabled via build config
00:01:32.471 port: explicitly disabled via build config
00:01:32.471 pdump: explicitly disabled via build config
00:01:32.471 table: explicitly disabled via build config
00:01:32.471 pipeline: explicitly disabled via build config
00:01:32.471 graph: explicitly disabled via build config
00:01:32.471 node: explicitly disabled via build config
00:01:32.471
00:01:32.471 drivers:
00:01:32.471 common/cpt: not in enabled drivers build config
00:01:32.471 common/dpaax: not in enabled drivers build config
00:01:32.471 common/iavf: not in enabled drivers build config
00:01:32.471 common/idpf: not in enabled drivers build config
00:01:32.471 common/ionic: not in enabled drivers build config
00:01:32.471 common/mvep: not in enabled drivers build config
00:01:32.471 common/octeontx: not in enabled drivers build config
00:01:32.471 bus/auxiliary: not in enabled drivers build config
00:01:32.471 bus/cdx: not in enabled drivers build config
00:01:32.471 bus/dpaa: not in enabled drivers build config
00:01:32.471 bus/fslmc: not in enabled drivers build config
00:01:32.471 bus/ifpga: not in enabled drivers build config
00:01:32.471 bus/platform: not in enabled drivers build config
00:01:32.471 bus/uacce: not in enabled drivers build config
00:01:32.471 bus/vmbus: not in enabled drivers build config
00:01:32.471 common/cnxk: not in enabled drivers build config
00:01:32.472 common/mlx5: not in enabled drivers build config
00:01:32.472 common/nfp: not in enabled drivers build config
00:01:32.472 common/nitrox: not in enabled drivers build config
00:01:32.472 common/qat: not in enabled drivers build config
00:01:32.472 common/sfc_efx: not in enabled drivers build config
00:01:32.472 mempool/bucket: not in enabled drivers build config
00:01:32.472 mempool/cnxk: not in enabled drivers build config
00:01:32.472 mempool/dpaa: not in enabled drivers build config
00:01:32.472 mempool/dpaa2: not in enabled drivers build config
00:01:32.472 mempool/octeontx: not in enabled drivers build config
00:01:32.472 mempool/stack: not in enabled drivers build config
00:01:32.472 dma/cnxk: not in enabled drivers build config
00:01:32.472 dma/dpaa: not in enabled drivers build config
00:01:32.472 dma/dpaa2: not in enabled drivers build config
00:01:32.472 dma/hisilicon: not in enabled drivers build config
00:01:32.472 dma/idxd: not in enabled drivers build config
00:01:32.472 dma/ioat: not in enabled drivers build config
00:01:32.472 dma/skeleton: not in enabled drivers build config
00:01:32.472 net/af_packet: not in enabled drivers build config
00:01:32.472 net/af_xdp: not in enabled drivers build config
00:01:32.472 net/ark: not in enabled drivers build config
00:01:32.472 net/atlantic: not in enabled drivers build config
00:01:32.472 net/avp: not in enabled drivers build config
00:01:32.472 net/axgbe: not in enabled drivers build config
00:01:32.472 net/bnx2x: not in enabled drivers build config
00:01:32.472 net/bnxt: not in enabled drivers build config
00:01:32.472 net/bonding: not in enabled drivers build config
00:01:32.472 net/cnxk: not in enabled drivers build config
00:01:32.472 net/cpfl: not in enabled drivers build config
00:01:32.472 net/cxgbe: not in enabled drivers build config
00:01:32.472 net/dpaa: not in enabled drivers build config
00:01:32.472 net/dpaa2: not in enabled drivers build config
00:01:32.472 net/e1000: not in enabled drivers build config
00:01:32.472 net/ena: not in enabled drivers build config
00:01:32.472 net/enetc: not in enabled drivers build config
00:01:32.472 net/enetfec: not in enabled drivers build config
00:01:32.472 net/enic: not in enabled drivers build config
00:01:32.472 net/failsafe: not in enabled drivers build config
00:01:32.472 net/fm10k: not in enabled drivers build config
00:01:32.472 net/gve: not in enabled drivers build config
00:01:32.472 net/hinic: not in enabled drivers build config
00:01:32.472 net/hns3: not in enabled drivers build config
00:01:32.472 net/i40e: not in enabled drivers build config
00:01:32.472 net/iavf: not in enabled drivers build config
00:01:32.472 net/ice: not in enabled drivers build config
00:01:32.472 net/idpf: not in enabled drivers build config
00:01:32.472 net/igc: not in enabled drivers build config
00:01:32.472 net/ionic: not in enabled drivers build config
00:01:32.472 net/ipn3ke: not in enabled drivers build config
00:01:32.472 net/ixgbe: not in enabled drivers build config
00:01:32.472 net/mana: not in enabled drivers build config
00:01:32.472 net/memif: not in enabled drivers build config
00:01:32.472 net/mlx4: not in enabled drivers build config
00:01:32.472 net/mlx5: not in enabled drivers build config
00:01:32.472 net/mvneta: not in enabled drivers build config
00:01:32.472 net/mvpp2: not in enabled drivers build config
00:01:32.472 net/netvsc: not in enabled drivers build config
00:01:32.472 net/nfb: not in enabled drivers build config
00:01:32.472 net/nfp: not in enabled drivers build config
00:01:32.472 net/ngbe: not in enabled drivers build config
00:01:32.472 net/null: not in enabled drivers build config
00:01:32.472 net/octeontx: not in enabled drivers build config
00:01:32.472 net/octeon_ep: not in enabled drivers build config
00:01:32.472 net/pcap: not in enabled drivers build config
00:01:32.472 net/pfe: not in enabled drivers build config
00:01:32.472 net/qede: not in enabled drivers build config
00:01:32.472 net/ring: not in enabled drivers build config
00:01:32.472 net/sfc: not in enabled drivers build config
00:01:32.472 net/softnic: not in enabled drivers build config
00:01:32.472 net/tap: not in enabled drivers build config
00:01:32.472 net/thunderx: not in enabled drivers build config
00:01:32.472 net/txgbe: not in enabled drivers build config
00:01:32.472 net/vdev_netvsc: not in enabled drivers build config
00:01:32.472 net/vhost: not in enabled drivers build config
00:01:32.472 net/virtio: not in enabled drivers build config
00:01:32.472 net/vmxnet3: not in enabled drivers build config
00:01:32.472 raw/*: missing internal dependency, "rawdev"
00:01:32.472 crypto/armv8: not in enabled drivers build config
00:01:32.472 crypto/bcmfs: not in enabled drivers build config
00:01:32.472 crypto/caam_jr: not in enabled drivers build config
00:01:32.472 crypto/ccp: not in enabled drivers build config
00:01:32.472 crypto/cnxk: not in enabled drivers build config
00:01:32.472 crypto/dpaa_sec: not in enabled drivers build config
00:01:32.472 crypto/dpaa2_sec: not in enabled drivers build config
00:01:32.472 crypto/ipsec_mb: not in enabled drivers build config
00:01:32.472 crypto/mlx5: not in enabled drivers build config
00:01:32.472 crypto/mvsam: not in enabled drivers build config
00:01:32.472 crypto/nitrox: not in enabled drivers build config
00:01:32.472 crypto/null: not in enabled drivers build config
00:01:32.472 crypto/octeontx: not in enabled drivers build config
00:01:32.472 crypto/openssl: not in enabled drivers build config
00:01:32.472 crypto/scheduler: not in enabled drivers build config
00:01:32.472 crypto/uadk: not in enabled drivers build config
00:01:32.472 crypto/virtio: not in enabled drivers build config
00:01:32.472 compress/isal: not in enabled drivers build config
00:01:32.472 compress/mlx5: not in enabled drivers build config
00:01:32.472 compress/nitrox: not in enabled drivers build config
00:01:32.472 compress/octeontx: not in enabled drivers build config
00:01:32.472 compress/zlib: not in enabled drivers build config
00:01:32.472 regex/*: missing internal dependency, "regexdev"
00:01:32.472 ml/*: missing internal dependency, "mldev"
00:01:32.472 vdpa/ifc: not in enabled drivers build config
00:01:32.472 vdpa/mlx5: not in enabled drivers build config
00:01:32.472 vdpa/nfp: not in enabled drivers build config
00:01:32.472 vdpa/sfc: not in enabled drivers build config
00:01:32.472 event/*: missing internal dependency, "eventdev"
00:01:32.472 baseband/*: missing internal dependency, "bbdev"
00:01:32.472 gpu/*: missing internal dependency, "gpudev"
00:01:32.472
00:01:32.472
00:01:32.472 Build targets in project: 85
00:01:32.472
00:01:32.472 DPDK 24.03.0
00:01:32.472
00:01:32.472 User defined options
00:01:32.472 buildtype : debug
00:01:32.472 default_library : shared
00:01:32.472 libdir : lib
00:01:32.472 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:32.472 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:32.472 c_link_args :
00:01:32.472 cpu_instruction_set: native
00:01:32.472 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:01:32.472 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:01:32.472 enable_docs : false
00:01:32.472 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:32.472 enable_kmods : false
00:01:32.472 max_lcores : 128
00:01:32.472 tests : false
00:01:32.472
00:01:32.472 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:32.472 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:32.472 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:32.472 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:32.472 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:32.472 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:32.472 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:32.472 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:32.472 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:32.472 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:32.472 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:32.472 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:32.472 [11/268] Linking static target lib/librte_kvargs.a
00:01:32.729 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:32.729 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:32.730 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:32.730 [15/268] Linking static target lib/librte_log.a
00:01:32.730 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:33.303 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.303 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:33.303 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:33.303 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:33.303 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:33.303 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:33.303 [23/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:33.303 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:33.303 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:33.303 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:33.303 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:33.303 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:33.303 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:33.303 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:33.566 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:33.566 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:33.566 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:33.566 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:33.566 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:33.566 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:33.566 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:33.566 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:33.566 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:33.566 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:33.566 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:33.566 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:33.566 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:33.566 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:33.566 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:33.566 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:33.566 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:33.566 [48/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:33.566 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:33.566 [50/268] Linking static target lib/librte_telemetry.a
00:01:33.566 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:33.566 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:33.566 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:33.566 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:33.566 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:33.566 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:33.566 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:33.566 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:33.566 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:33.566 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:33.826 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:33.826 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:33.826 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:33.826 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:33.826 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:33.826 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.088 [67/268] Linking target lib/librte_log.so.24.1
00:01:34.088 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:34.088 [69/268] Linking static target lib/librte_pci.a
00:01:34.088 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:34.088 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:34.088 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:34.088 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:34.349 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:34.349 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:34.349 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:34.349 [77/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:34.349 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:34.349 [79/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:34.349 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:34.349 [81/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:34.349 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:34.349 [83/268] Linking target lib/librte_kvargs.so.24.1
00:01:34.349 [84/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:34.349 [85/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:34.349 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:34.349 [87/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.349 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:34.349 [89/268] Linking static target lib/librte_ring.a
00:01:34.349 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:34.349 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:34.349 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:34.349 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:34.349 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:34.349 [95/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:34.610 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:34.610 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:34.610 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:34.610 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:34.610 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:34.611 [101/268] Linking static target lib/librte_meter.a
00:01:34.611 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:34.611 [103/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.611 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:34.611 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:34.611 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:34.611 [107/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:34.611 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:34.611 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:34.611 [110/268] Linking target lib/librte_telemetry.so.24.1
00:01:34.611 [111/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:34.611 [112/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:34.611 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:34.611 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:34.611 [115/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:34.611 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:34.611 [117/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:34.611 [118/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:34.611 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:34.611 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:34.611 [121/268] Linking static target lib/librte_rcu.a
00:01:34.611 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:34.868 [123/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:34.868 [124/268] Linking static target lib/librte_eal.a
00:01:34.868 [125/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:34.868 [126/268] Linking static target lib/librte_mempool.a
00:01:34.868 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:34.868 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:34.868 [129/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:34.868 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:34.868 [131/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:34.868 [132/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:35.129 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:35.129 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:35.129 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:35.129 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:35.129 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:35.129 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:35.129 [139/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:35.129 [140/268] Linking static target lib/librte_net.a
00:01:35.129 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:35.129 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:35.389 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:35.389 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:35.389 [145/268] Linking static target lib/librte_cmdline.a
00:01:35.389 [146/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:35.389 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:35.389 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:35.389 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:35.389 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:35.389 [151/268] Compiling C object
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:35.389 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:35.389 [153/268] Linking static target lib/librte_timer.a 00:01:35.646 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:35.646 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:35.646 [156/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:35.646 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.646 [158/268] Linking static target lib/librte_dmadev.a 00:01:35.646 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:35.646 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:35.646 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:35.646 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:35.646 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:35.646 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:35.646 [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:35.904 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:35.904 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:35.904 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:35.904 [169/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.904 [170/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:35.904 [171/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:35.904 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:35.904 [173/268] Linking static target lib/librte_power.a 00:01:35.904 [174/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.904 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:35.904 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:35.904 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:35.904 [178/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:35.904 [179/268] Linking static target lib/librte_compressdev.a 00:01:35.904 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:36.161 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:36.161 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:36.161 [183/268] Linking static target lib/librte_hash.a 00:01:36.161 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:36.161 [185/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:36.161 [186/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:36.161 [187/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.161 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:36.161 [189/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:36.161 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:36.161 
[191/268] Linking static target lib/librte_mbuf.a 00:01:36.161 [192/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:36.161 [193/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:36.161 [194/268] Linking static target lib/librte_reorder.a 00:01:36.161 [195/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.418 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:36.418 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:36.418 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:36.418 [199/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:36.418 [200/268] Linking static target drivers/librte_bus_vdev.a 00:01:36.418 [201/268] Linking static target lib/librte_security.a 00:01:36.418 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:36.418 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:36.418 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:36.418 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.418 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.418 [207/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:36.418 [208/268] Linking static target drivers/librte_bus_pci.a 00:01:36.418 [209/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:36.418 [210/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.418 [211/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.418 [212/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.418 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:36.675 [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.675 [215/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.675 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.675 [217/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:36.675 [218/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:36.675 [219/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:36.675 [220/268] Linking static target drivers/librte_mempool_ring.a 00:01:36.675 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.675 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:36.675 [223/268] Linking static target lib/librte_ethdev.a 00:01:36.932 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.932 [225/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:36.932 [226/268] Linking static target lib/librte_cryptodev.a 00:01:38.303 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.867 [228/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:41.396 [229/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.396 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.396 [231/268] Linking target lib/librte_eal.so.24.1 00:01:41.396 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:41.396 [233/268] Linking target lib/librte_pci.so.24.1 00:01:41.396 [234/268] Linking target lib/librte_timer.so.24.1 00:01:41.396 [235/268] Linking target lib/librte_ring.so.24.1 00:01:41.396 [236/268] Linking target lib/librte_meter.so.24.1 00:01:41.396 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:41.396 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:41.396 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:41.396 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:41.396 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:41.396 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:41.396 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:41.396 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:41.396 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:41.396 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:41.396 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:41.396 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:41.396 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:41.396 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:41.653 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:41.653 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:41.653 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:41.653 [254/268] Linking target lib/librte_net.so.24.1 00:01:41.653 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:41.912 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:41.912 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:41.912 [258/268] Linking target lib/librte_security.so.24.1 00:01:41.912 [259/268] Linking target lib/librte_hash.so.24.1 00:01:41.912 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:41.912 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:41.912 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:41.912 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:41.912 [264/268] Linking target lib/librte_power.so.24.1 00:01:44.445 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:44.445 [266/268] Linking static target lib/librte_vhost.a 00:01:45.381 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.381 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:45.381 INFO: autodetecting backend as ninja 00:01:45.381 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:46.314 CC lib/log/log.o 00:01:46.314 CC lib/log/log_flags.o 00:01:46.314 CC 
lib/ut/ut.o 00:01:46.314 CC lib/log/log_deprecated.o 00:01:46.314 CC lib/ut_mock/mock.o 00:01:46.314 LIB libspdk_ut.a 00:01:46.314 LIB libspdk_log.a 00:01:46.314 SO libspdk_ut.so.2.0 00:01:46.573 LIB libspdk_ut_mock.a 00:01:46.573 SO libspdk_log.so.7.0 00:01:46.573 SO libspdk_ut_mock.so.6.0 00:01:46.573 SYMLINK libspdk_ut.so 00:01:46.573 SYMLINK libspdk_log.so 00:01:46.573 SYMLINK libspdk_ut_mock.so 00:01:46.573 CXX lib/trace_parser/trace.o 00:01:46.573 CC lib/dma/dma.o 00:01:46.573 CC lib/ioat/ioat.o 00:01:46.573 CC lib/util/base64.o 00:01:46.573 CC lib/util/bit_array.o 00:01:46.573 CC lib/util/cpuset.o 00:01:46.573 CC lib/util/crc16.o 00:01:46.573 CC lib/util/crc32.o 00:01:46.573 CC lib/util/crc32c.o 00:01:46.573 CC lib/util/crc32_ieee.o 00:01:46.573 CC lib/util/crc64.o 00:01:46.573 CC lib/util/dif.o 00:01:46.573 CC lib/util/fd.o 00:01:46.573 CC lib/util/file.o 00:01:46.573 CC lib/util/hexlify.o 00:01:46.573 CC lib/util/iov.o 00:01:46.573 CC lib/util/math.o 00:01:46.573 CC lib/util/pipe.o 00:01:46.573 CC lib/util/strerror_tls.o 00:01:46.573 CC lib/util/string.o 00:01:46.573 CC lib/util/uuid.o 00:01:46.573 CC lib/util/fd_group.o 00:01:46.573 CC lib/util/zipf.o 00:01:46.573 CC lib/util/xor.o 00:01:46.831 CC lib/vfio_user/host/vfio_user_pci.o 00:01:46.831 CC lib/vfio_user/host/vfio_user.o 00:01:46.831 LIB libspdk_dma.a 00:01:46.831 SO libspdk_dma.so.4.0 00:01:47.123 SYMLINK libspdk_dma.so 00:01:47.123 LIB libspdk_ioat.a 00:01:47.123 SO libspdk_ioat.so.7.0 00:01:47.123 LIB libspdk_vfio_user.a 00:01:47.123 SYMLINK libspdk_ioat.so 00:01:47.123 SO libspdk_vfio_user.so.5.0 00:01:47.123 SYMLINK libspdk_vfio_user.so 00:01:47.123 LIB libspdk_util.a 00:01:47.387 SO libspdk_util.so.9.1 00:01:47.387 SYMLINK libspdk_util.so 00:01:47.645 CC lib/conf/conf.o 00:01:47.645 CC lib/rdma_provider/common.o 00:01:47.645 CC lib/vmd/vmd.o 00:01:47.645 CC lib/env_dpdk/env.o 00:01:47.645 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:47.645 CC lib/json/json_parse.o 00:01:47.645 CC lib/vmd/led.o 00:01:47.645 CC lib/env_dpdk/memory.o 00:01:47.645 CC lib/json/json_util.o 00:01:47.645 CC lib/env_dpdk/pci.o 00:01:47.645 CC lib/env_dpdk/init.o 00:01:47.645 CC lib/json/json_write.o 00:01:47.645 CC lib/idxd/idxd.o 00:01:47.645 CC lib/rdma_utils/rdma_utils.o 00:01:47.645 CC lib/env_dpdk/threads.o 00:01:47.645 CC lib/env_dpdk/pci_ioat.o 00:01:47.645 CC lib/idxd/idxd_user.o 00:01:47.645 CC lib/env_dpdk/pci_virtio.o 00:01:47.645 CC lib/idxd/idxd_kernel.o 00:01:47.645 CC lib/env_dpdk/pci_vmd.o 00:01:47.645 CC lib/env_dpdk/pci_idxd.o 00:01:47.645 CC lib/env_dpdk/pci_event.o 00:01:47.645 CC lib/env_dpdk/sigbus_handler.o 00:01:47.645 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:47.645 CC lib/env_dpdk/pci_dpdk.o 00:01:47.645 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:47.645 LIB libspdk_trace_parser.a 00:01:47.645 SO libspdk_trace_parser.so.5.0 00:01:47.645 SYMLINK libspdk_trace_parser.so 00:01:47.903 LIB libspdk_rdma_provider.a 00:01:47.903 SO libspdk_rdma_provider.so.6.0 00:01:47.903 LIB libspdk_conf.a 00:01:47.903 SO libspdk_conf.so.6.0 00:01:47.903 LIB libspdk_rdma_utils.a 00:01:47.903 SYMLINK libspdk_rdma_provider.so 00:01:47.903 SO libspdk_rdma_utils.so.1.0 00:01:47.903 LIB libspdk_json.a 00:01:47.903 SYMLINK libspdk_conf.so 00:01:47.903 SO libspdk_json.so.6.0 00:01:47.903 SYMLINK libspdk_rdma_utils.so 00:01:47.903 SYMLINK libspdk_json.so 00:01:48.162 CC lib/jsonrpc/jsonrpc_server.o 00:01:48.162 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:48.162 CC lib/jsonrpc/jsonrpc_client.o 00:01:48.162 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:48.162 
LIB libspdk_idxd.a 00:01:48.162 SO libspdk_idxd.so.12.0 00:01:48.162 LIB libspdk_vmd.a 00:01:48.162 SYMLINK libspdk_idxd.so 00:01:48.420 SO libspdk_vmd.so.6.0 00:01:48.420 SYMLINK libspdk_vmd.so 00:01:48.420 LIB libspdk_jsonrpc.a 00:01:48.420 SO libspdk_jsonrpc.so.6.0 00:01:48.420 SYMLINK libspdk_jsonrpc.so 00:01:48.684 CC lib/rpc/rpc.o 00:01:48.943 LIB libspdk_rpc.a 00:01:48.943 SO libspdk_rpc.so.6.0 00:01:48.943 SYMLINK libspdk_rpc.so 00:01:49.200 CC lib/keyring/keyring.o 00:01:49.200 CC lib/trace/trace.o 00:01:49.200 CC lib/notify/notify.o 00:01:49.200 CC lib/trace/trace_flags.o 00:01:49.200 CC lib/keyring/keyring_rpc.o 00:01:49.200 CC lib/notify/notify_rpc.o 00:01:49.200 CC lib/trace/trace_rpc.o 00:01:49.458 LIB libspdk_notify.a 00:01:49.458 SO libspdk_notify.so.6.0 00:01:49.458 LIB libspdk_keyring.a 00:01:49.458 SYMLINK libspdk_notify.so 00:01:49.458 LIB libspdk_trace.a 00:01:49.458 SO libspdk_keyring.so.1.0 00:01:49.458 SO libspdk_trace.so.10.0 00:01:49.458 SYMLINK libspdk_keyring.so 00:01:49.458 SYMLINK libspdk_trace.so 00:01:49.716 LIB libspdk_env_dpdk.a 00:01:49.716 CC lib/thread/thread.o 00:01:49.716 CC lib/thread/iobuf.o 00:01:49.716 CC lib/sock/sock.o 00:01:49.716 CC lib/sock/sock_rpc.o 00:01:49.716 SO libspdk_env_dpdk.so.14.1 00:01:49.973 SYMLINK libspdk_env_dpdk.so 00:01:49.973 LIB libspdk_sock.a 00:01:49.973 SO libspdk_sock.so.10.0 00:01:50.231 SYMLINK libspdk_sock.so 00:01:50.231 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:50.231 CC lib/nvme/nvme_ctrlr.o 00:01:50.231 CC lib/nvme/nvme_fabric.o 00:01:50.231 CC lib/nvme/nvme_ns_cmd.o 00:01:50.231 CC lib/nvme/nvme_ns.o 00:01:50.231 CC lib/nvme/nvme_pcie_common.o 00:01:50.231 CC lib/nvme/nvme_pcie.o 00:01:50.231 CC lib/nvme/nvme_qpair.o 00:01:50.231 CC lib/nvme/nvme.o 00:01:50.231 CC lib/nvme/nvme_quirks.o 00:01:50.231 CC lib/nvme/nvme_transport.o 00:01:50.231 CC lib/nvme/nvme_discovery.o 00:01:50.231 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:50.231 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:50.231 CC lib/nvme/nvme_tcp.o 00:01:50.231 CC lib/nvme/nvme_opal.o 00:01:50.231 CC lib/nvme/nvme_io_msg.o 00:01:50.231 CC lib/nvme/nvme_poll_group.o 00:01:50.231 CC lib/nvme/nvme_zns.o 00:01:50.231 CC lib/nvme/nvme_stubs.o 00:01:50.231 CC lib/nvme/nvme_auth.o 00:01:50.231 CC lib/nvme/nvme_cuse.o 00:01:50.231 CC lib/nvme/nvme_vfio_user.o 00:01:50.231 CC lib/nvme/nvme_rdma.o 00:01:51.163 LIB libspdk_thread.a 00:01:51.163 SO libspdk_thread.so.10.1 00:01:51.421 SYMLINK libspdk_thread.so 00:01:51.421 CC lib/blob/blobstore.o 00:01:51.421 CC lib/init/json_config.o 00:01:51.421 CC lib/vfu_tgt/tgt_endpoint.o 00:01:51.421 CC lib/accel/accel.o 00:01:51.421 CC lib/virtio/virtio.o 00:01:51.421 CC lib/blob/request.o 00:01:51.421 CC lib/init/subsystem.o 00:01:51.421 CC lib/accel/accel_rpc.o 00:01:51.421 CC lib/vfu_tgt/tgt_rpc.o 00:01:51.421 CC lib/virtio/virtio_vhost_user.o 00:01:51.421 CC lib/blob/zeroes.o 00:01:51.421 CC lib/accel/accel_sw.o 00:01:51.421 CC lib/init/subsystem_rpc.o 00:01:51.421 CC lib/blob/blob_bs_dev.o 00:01:51.421 CC lib/virtio/virtio_vfio_user.o 00:01:51.421 CC lib/init/rpc.o 00:01:51.421 CC lib/virtio/virtio_pci.o 00:01:51.680 LIB libspdk_init.a 00:01:51.680 SO libspdk_init.so.5.0 00:01:51.937 LIB libspdk_virtio.a 00:01:51.937 LIB libspdk_vfu_tgt.a 00:01:51.937 SYMLINK libspdk_init.so 00:01:51.937 SO libspdk_vfu_tgt.so.3.0 00:01:51.937 SO libspdk_virtio.so.7.0 00:01:51.937 SYMLINK libspdk_vfu_tgt.so 00:01:51.937 SYMLINK libspdk_virtio.so 00:01:51.937 CC lib/event/app.o 00:01:51.937 CC lib/event/reactor.o 00:01:51.937 CC lib/event/log_rpc.o 
00:01:51.937 CC lib/event/app_rpc.o 00:01:51.937 CC lib/event/scheduler_static.o 00:01:52.503 LIB libspdk_event.a 00:01:52.503 SO libspdk_event.so.14.0 00:01:52.503 LIB libspdk_accel.a 00:01:52.503 SYMLINK libspdk_event.so 00:01:52.503 SO libspdk_accel.so.15.1 00:01:52.503 SYMLINK libspdk_accel.so 00:01:52.760 LIB libspdk_nvme.a 00:01:52.760 CC lib/bdev/bdev.o 00:01:52.760 CC lib/bdev/bdev_rpc.o 00:01:52.760 CC lib/bdev/bdev_zone.o 00:01:52.760 CC lib/bdev/part.o 00:01:52.760 CC lib/bdev/scsi_nvme.o 00:01:52.760 SO libspdk_nvme.so.13.1 00:01:53.018 SYMLINK libspdk_nvme.so 00:01:54.923 LIB libspdk_blob.a 00:01:54.923 SO libspdk_blob.so.11.0 00:01:54.923 SYMLINK libspdk_blob.so 00:01:54.923 CC lib/blobfs/blobfs.o 00:01:54.923 CC lib/lvol/lvol.o 00:01:54.923 CC lib/blobfs/tree.o 00:01:55.488 LIB libspdk_bdev.a 00:01:55.488 SO libspdk_bdev.so.15.1 00:01:55.488 SYMLINK libspdk_bdev.so 00:01:55.750 LIB libspdk_blobfs.a 00:01:55.750 CC lib/ftl/ftl_core.o 00:01:55.750 CC lib/ftl/ftl_init.o 00:01:55.750 CC lib/ftl/ftl_layout.o 00:01:55.750 CC lib/ublk/ublk.o 00:01:55.750 CC lib/ftl/ftl_debug.o 00:01:55.750 CC lib/ublk/ublk_rpc.o 00:01:55.750 CC lib/scsi/dev.o 00:01:55.750 CC lib/nbd/nbd.o 00:01:55.750 CC lib/nvmf/ctrlr.o 00:01:55.750 CC lib/ftl/ftl_io.o 00:01:55.750 CC lib/scsi/lun.o 00:01:55.750 CC lib/nbd/nbd_rpc.o 00:01:55.750 CC lib/nvmf/ctrlr_discovery.o 00:01:55.750 CC lib/ftl/ftl_sb.o 00:01:55.750 CC lib/scsi/port.o 00:01:55.750 CC lib/nvmf/ctrlr_bdev.o 00:01:55.750 CC lib/ftl/ftl_l2p.o 00:01:55.750 CC lib/scsi/scsi.o 00:01:55.750 CC lib/ftl/ftl_l2p_flat.o 00:01:55.750 CC lib/nvmf/subsystem.o 00:01:55.750 CC lib/scsi/scsi_bdev.o 00:01:55.751 CC lib/nvmf/nvmf.o 00:01:55.751 CC lib/scsi/scsi_pr.o 00:01:55.751 CC lib/ftl/ftl_nv_cache.o 00:01:55.751 CC lib/scsi/scsi_rpc.o 00:01:55.751 CC lib/nvmf/nvmf_rpc.o 00:01:55.751 CC lib/ftl/ftl_band.o 00:01:55.751 CC lib/scsi/task.o 00:01:55.751 CC lib/nvmf/transport.o 00:01:55.751 CC lib/ftl/ftl_band_ops.o 00:01:55.751 CC lib/nvmf/tcp.o 00:01:55.751 CC lib/ftl/ftl_writer.o 00:01:55.751 CC lib/ftl/ftl_rq.o 00:01:55.751 CC lib/nvmf/stubs.o 00:01:55.751 CC lib/ftl/ftl_reloc.o 00:01:55.751 CC lib/nvmf/mdns_server.o 00:01:55.751 CC lib/ftl/ftl_l2p_cache.o 00:01:55.751 CC lib/nvmf/vfio_user.o 00:01:55.751 CC lib/nvmf/rdma.o 00:01:55.751 CC lib/nvmf/auth.o 00:01:55.751 CC lib/ftl/ftl_p2l.o 00:01:55.751 CC lib/ftl/mngt/ftl_mngt.o 00:01:55.751 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:55.751 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:55.751 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:55.751 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:55.751 SO libspdk_blobfs.so.10.0 00:01:56.014 SYMLINK libspdk_blobfs.so 00:01:56.014 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:56.014 LIB libspdk_lvol.a 00:01:56.014 SO libspdk_lvol.so.10.0 00:01:56.014 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:56.014 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:56.014 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:56.014 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:56.014 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:56.014 SYMLINK libspdk_lvol.so 00:01:56.014 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:56.014 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:56.276 CC lib/ftl/utils/ftl_conf.o 00:01:56.276 CC lib/ftl/utils/ftl_md.o 00:01:56.276 CC lib/ftl/utils/ftl_mempool.o 00:01:56.276 CC lib/ftl/utils/ftl_bitmap.o 00:01:56.276 CC lib/ftl/utils/ftl_property.o 00:01:56.276 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:56.276 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:56.276 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:56.276 CC 
lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:56.276 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:56.276 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:56.276 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:56.276 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:56.276 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:56.276 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:56.276 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:56.276 CC lib/ftl/base/ftl_base_dev.o 00:01:56.533 CC lib/ftl/base/ftl_base_bdev.o 00:01:56.533 CC lib/ftl/ftl_trace.o 00:01:56.533 LIB libspdk_nbd.a 00:01:56.533 SO libspdk_nbd.so.7.0 00:01:56.533 SYMLINK libspdk_nbd.so 00:01:56.533 LIB libspdk_scsi.a 00:01:56.791 SO libspdk_scsi.so.9.0 00:01:56.791 LIB libspdk_ublk.a 00:01:56.791 SO libspdk_ublk.so.3.0 00:01:56.791 SYMLINK libspdk_scsi.so 00:01:56.791 SYMLINK libspdk_ublk.so 00:01:57.048 CC lib/vhost/vhost.o 00:01:57.048 CC lib/iscsi/conn.o 00:01:57.048 CC lib/vhost/vhost_rpc.o 00:01:57.048 CC lib/iscsi/init_grp.o 00:01:57.048 CC lib/vhost/vhost_scsi.o 00:01:57.048 CC lib/vhost/vhost_blk.o 00:01:57.048 CC lib/iscsi/iscsi.o 00:01:57.048 CC lib/iscsi/md5.o 00:01:57.048 CC lib/vhost/rte_vhost_user.o 00:01:57.048 CC lib/iscsi/param.o 00:01:57.048 CC lib/iscsi/portal_grp.o 00:01:57.048 CC lib/iscsi/tgt_node.o 00:01:57.048 CC lib/iscsi/iscsi_subsystem.o 00:01:57.048 CC lib/iscsi/iscsi_rpc.o 00:01:57.048 CC lib/iscsi/task.o 00:01:57.304 LIB libspdk_ftl.a 00:01:57.304 SO libspdk_ftl.so.9.0 00:01:57.868 SYMLINK libspdk_ftl.so 00:01:58.124 LIB libspdk_vhost.a 00:01:58.124 SO libspdk_vhost.so.8.0 00:01:58.382 SYMLINK libspdk_vhost.so 00:01:58.382 LIB libspdk_nvmf.a 00:01:58.383 LIB libspdk_iscsi.a 00:01:58.383 SO libspdk_nvmf.so.18.1 00:01:58.383 SO libspdk_iscsi.so.8.0 00:01:58.641 SYMLINK libspdk_iscsi.so 00:01:58.641 SYMLINK libspdk_nvmf.so 00:01:58.898 CC module/vfu_device/vfu_virtio.o 00:01:58.898 CC module/env_dpdk/env_dpdk_rpc.o 00:01:58.898 CC module/vfu_device/vfu_virtio_blk.o 00:01:58.898 CC module/vfu_device/vfu_virtio_scsi.o 00:01:58.898 CC module/vfu_device/vfu_virtio_rpc.o 00:01:58.898 CC module/blob/bdev/blob_bdev.o 00:01:58.898 CC module/accel/ioat/accel_ioat.o 00:01:58.898 CC module/keyring/linux/keyring.o 00:01:58.898 CC module/accel/error/accel_error.o 00:01:58.898 CC module/keyring/file/keyring.o 00:01:58.898 CC module/scheduler/gscheduler/gscheduler.o 00:01:58.898 CC module/accel/ioat/accel_ioat_rpc.o 00:01:58.898 CC module/keyring/file/keyring_rpc.o 00:01:58.898 CC module/sock/posix/posix.o 00:01:58.898 CC module/accel/dsa/accel_dsa.o 00:01:58.898 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:58.898 CC module/keyring/linux/keyring_rpc.o 00:01:58.898 CC module/accel/error/accel_error_rpc.o 00:01:58.898 CC module/accel/dsa/accel_dsa_rpc.o 00:01:58.898 CC module/accel/iaa/accel_iaa.o 00:01:58.898 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:58.898 CC module/accel/iaa/accel_iaa_rpc.o 00:01:58.898 LIB libspdk_env_dpdk_rpc.a 00:01:59.155 SO libspdk_env_dpdk_rpc.so.6.0 00:01:59.155 SYMLINK libspdk_env_dpdk_rpc.so 00:01:59.155 LIB libspdk_keyring_linux.a 00:01:59.155 LIB libspdk_keyring_file.a 00:01:59.155 LIB libspdk_scheduler_gscheduler.a 00:01:59.155 LIB libspdk_scheduler_dpdk_governor.a 00:01:59.155 SO libspdk_keyring_linux.so.1.0 00:01:59.155 SO libspdk_keyring_file.so.1.0 00:01:59.155 SO libspdk_scheduler_gscheduler.so.4.0 00:01:59.155 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:59.155 LIB libspdk_accel_error.a 00:01:59.155 LIB libspdk_accel_ioat.a 00:01:59.155 LIB libspdk_scheduler_dynamic.a 00:01:59.155 LIB libspdk_accel_iaa.a 00:01:59.155 
SO libspdk_accel_error.so.2.0 00:01:59.155 SO libspdk_accel_ioat.so.6.0 00:01:59.155 SYMLINK libspdk_keyring_linux.so 00:01:59.155 SO libspdk_scheduler_dynamic.so.4.0 00:01:59.155 SYMLINK libspdk_scheduler_gscheduler.so 00:01:59.155 SYMLINK libspdk_keyring_file.so 00:01:59.155 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:59.155 SO libspdk_accel_iaa.so.3.0 00:01:59.155 LIB libspdk_accel_dsa.a 00:01:59.155 SYMLINK libspdk_accel_error.so 00:01:59.155 SYMLINK libspdk_accel_ioat.so 00:01:59.155 SYMLINK libspdk_scheduler_dynamic.so 00:01:59.155 LIB libspdk_blob_bdev.a 00:01:59.413 SO libspdk_accel_dsa.so.5.0 00:01:59.413 SYMLINK libspdk_accel_iaa.so 00:01:59.413 SO libspdk_blob_bdev.so.11.0 00:01:59.413 SYMLINK libspdk_accel_dsa.so 00:01:59.413 SYMLINK libspdk_blob_bdev.so 00:01:59.671 LIB libspdk_vfu_device.a 00:01:59.671 SO libspdk_vfu_device.so.3.0 00:01:59.671 CC module/bdev/error/vbdev_error.o 00:01:59.671 CC module/bdev/error/vbdev_error_rpc.o 00:01:59.671 CC module/bdev/gpt/gpt.o 00:01:59.671 CC module/bdev/nvme/bdev_nvme.o 00:01:59.671 CC module/bdev/gpt/vbdev_gpt.o 00:01:59.671 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:59.671 CC module/bdev/delay/vbdev_delay.o 00:01:59.671 CC module/bdev/null/bdev_null.o 00:01:59.671 CC module/bdev/null/bdev_null_rpc.o 00:01:59.671 CC module/bdev/nvme/nvme_rpc.o 00:01:59.671 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:59.671 CC module/bdev/lvol/vbdev_lvol.o 00:01:59.671 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:59.671 CC module/blobfs/bdev/blobfs_bdev.o 00:01:59.671 CC module/bdev/malloc/bdev_malloc.o 00:01:59.671 CC module/bdev/ftl/bdev_ftl.o 00:01:59.671 CC module/bdev/split/vbdev_split.o 00:01:59.671 CC module/bdev/raid/bdev_raid.o 00:01:59.671 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:59.671 CC module/bdev/nvme/bdev_mdns_client.o 00:01:59.671 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:59.671 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:59.671 CC module/bdev/split/vbdev_split_rpc.o 00:01:59.671 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:59.671 CC module/bdev/raid/bdev_raid_rpc.o 00:01:59.671 CC module/bdev/nvme/vbdev_opal.o 00:01:59.671 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:59.671 CC module/bdev/raid/bdev_raid_sb.o 00:01:59.671 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:59.671 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:59.671 CC module/bdev/raid/raid0.o 00:01:59.671 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:59.671 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:59.671 CC module/bdev/raid/raid1.o 00:01:59.671 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:59.671 CC module/bdev/passthru/vbdev_passthru.o 00:01:59.671 CC module/bdev/raid/concat.o 00:01:59.671 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:59.671 CC module/bdev/aio/bdev_aio.o 00:01:59.671 CC module/bdev/aio/bdev_aio_rpc.o 00:01:59.671 CC module/bdev/iscsi/bdev_iscsi.o 00:01:59.671 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:59.671 SYMLINK libspdk_vfu_device.so 00:01:59.671 LIB libspdk_sock_posix.a 00:01:59.929 SO libspdk_sock_posix.so.6.0 00:01:59.929 SYMLINK libspdk_sock_posix.so 00:01:59.929 LIB libspdk_blobfs_bdev.a 00:01:59.929 SO libspdk_blobfs_bdev.so.6.0 00:01:59.929 LIB libspdk_bdev_split.a 00:01:59.929 LIB libspdk_bdev_null.a 00:01:59.929 SO libspdk_bdev_split.so.6.0 00:02:00.188 LIB libspdk_bdev_error.a 00:02:00.188 SO libspdk_bdev_null.so.6.0 00:02:00.188 LIB libspdk_bdev_ftl.a 00:02:00.188 SYMLINK libspdk_blobfs_bdev.so 00:02:00.188 LIB libspdk_bdev_passthru.a 00:02:00.188 SO libspdk_bdev_error.so.6.0 00:02:00.188 SO 
libspdk_bdev_ftl.so.6.0 00:02:00.188 LIB libspdk_bdev_zone_block.a 00:02:00.188 SYMLINK libspdk_bdev_split.so 00:02:00.188 LIB libspdk_bdev_gpt.a 00:02:00.188 SO libspdk_bdev_passthru.so.6.0 00:02:00.188 SYMLINK libspdk_bdev_null.so 00:02:00.188 SO libspdk_bdev_zone_block.so.6.0 00:02:00.188 SO libspdk_bdev_gpt.so.6.0 00:02:00.188 SYMLINK libspdk_bdev_error.so 00:02:00.188 SYMLINK libspdk_bdev_ftl.so 00:02:00.188 SYMLINK libspdk_bdev_passthru.so 00:02:00.188 LIB libspdk_bdev_malloc.a 00:02:00.188 LIB libspdk_bdev_aio.a 00:02:00.188 SYMLINK libspdk_bdev_zone_block.so 00:02:00.188 SYMLINK libspdk_bdev_gpt.so 00:02:00.188 LIB libspdk_bdev_iscsi.a 00:02:00.188 SO libspdk_bdev_aio.so.6.0 00:02:00.188 SO libspdk_bdev_malloc.so.6.0 00:02:00.188 LIB libspdk_bdev_delay.a 00:02:00.188 SO libspdk_bdev_iscsi.so.6.0 00:02:00.188 SO libspdk_bdev_delay.so.6.0 00:02:00.188 SYMLINK libspdk_bdev_aio.so 00:02:00.188 SYMLINK libspdk_bdev_malloc.so 00:02:00.188 SYMLINK libspdk_bdev_iscsi.so 00:02:00.188 SYMLINK libspdk_bdev_delay.so 00:02:00.188 LIB libspdk_bdev_lvol.a 00:02:00.446 SO libspdk_bdev_lvol.so.6.0 00:02:00.446 LIB libspdk_bdev_virtio.a 00:02:00.446 SYMLINK libspdk_bdev_lvol.so 00:02:00.446 SO libspdk_bdev_virtio.so.6.0 00:02:00.446 SYMLINK libspdk_bdev_virtio.so 00:02:00.704 LIB libspdk_bdev_raid.a 00:02:00.704 SO libspdk_bdev_raid.so.6.0 00:02:00.961 SYMLINK libspdk_bdev_raid.so 00:02:01.897 LIB libspdk_bdev_nvme.a 00:02:01.897 SO libspdk_bdev_nvme.so.7.0 00:02:02.194 SYMLINK libspdk_bdev_nvme.so 00:02:02.452 CC module/event/subsystems/sock/sock.o 00:02:02.452 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:02.452 CC module/event/subsystems/vmd/vmd.o 00:02:02.452 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:02.452 CC module/event/subsystems/iobuf/iobuf.o 00:02:02.452 CC module/event/subsystems/scheduler/scheduler.o 00:02:02.452 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:02.452 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:02.452 CC module/event/subsystems/keyring/keyring.o 00:02:02.452 LIB libspdk_event_keyring.a 00:02:02.452 LIB libspdk_event_vhost_blk.a 00:02:02.452 LIB libspdk_event_vfu_tgt.a 00:02:02.710 LIB libspdk_event_sock.a 00:02:02.710 LIB libspdk_event_vmd.a 00:02:02.710 LIB libspdk_event_scheduler.a 00:02:02.710 SO libspdk_event_keyring.so.1.0 00:02:02.710 SO libspdk_event_vhost_blk.so.3.0 00:02:02.710 LIB libspdk_event_iobuf.a 00:02:02.710 SO libspdk_event_vfu_tgt.so.3.0 00:02:02.710 SO libspdk_event_sock.so.5.0 00:02:02.710 SO libspdk_event_scheduler.so.4.0 00:02:02.710 SO libspdk_event_vmd.so.6.0 00:02:02.710 SO libspdk_event_iobuf.so.3.0 00:02:02.710 SYMLINK libspdk_event_keyring.so 00:02:02.710 SYMLINK libspdk_event_vhost_blk.so 00:02:02.710 SYMLINK libspdk_event_vfu_tgt.so 00:02:02.710 SYMLINK libspdk_event_sock.so 00:02:02.710 SYMLINK libspdk_event_scheduler.so 00:02:02.710 SYMLINK libspdk_event_vmd.so 00:02:02.710 SYMLINK libspdk_event_iobuf.so 00:02:02.967 CC module/event/subsystems/accel/accel.o 00:02:02.967 LIB libspdk_event_accel.a 00:02:02.967 SO libspdk_event_accel.so.6.0 00:02:03.223 SYMLINK libspdk_event_accel.so 00:02:03.223 CC module/event/subsystems/bdev/bdev.o 00:02:03.482 LIB libspdk_event_bdev.a 00:02:03.482 SO libspdk_event_bdev.so.6.0 00:02:03.482 SYMLINK libspdk_event_bdev.so 00:02:03.740 CC module/event/subsystems/scsi/scsi.o 00:02:03.740 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:03.740 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:03.740 CC module/event/subsystems/ublk/ublk.o 00:02:03.740 CC module/event/subsystems/nbd/nbd.o 
00:02:03.740 LIB libspdk_event_nbd.a 00:02:03.740 LIB libspdk_event_ublk.a 00:02:03.740 LIB libspdk_event_scsi.a 00:02:03.740 SO libspdk_event_nbd.so.6.0 00:02:03.740 SO libspdk_event_ublk.so.3.0 00:02:03.998 SO libspdk_event_scsi.so.6.0 00:02:03.998 SYMLINK libspdk_event_nbd.so 00:02:03.998 SYMLINK libspdk_event_ublk.so 00:02:03.998 SYMLINK libspdk_event_scsi.so 00:02:03.998 LIB libspdk_event_nvmf.a 00:02:03.998 SO libspdk_event_nvmf.so.6.0 00:02:03.998 SYMLINK libspdk_event_nvmf.so 00:02:03.998 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:03.998 CC module/event/subsystems/iscsi/iscsi.o 00:02:04.257 LIB libspdk_event_vhost_scsi.a 00:02:04.257 LIB libspdk_event_iscsi.a 00:02:04.257 SO libspdk_event_vhost_scsi.so.3.0 00:02:04.257 SO libspdk_event_iscsi.so.6.0 00:02:04.257 SYMLINK libspdk_event_vhost_scsi.so 00:02:04.257 SYMLINK libspdk_event_iscsi.so 00:02:04.516 SO libspdk.so.6.0 00:02:04.516 SYMLINK libspdk.so 00:02:04.516 CC test/rpc_client/rpc_client_test.o 00:02:04.516 TEST_HEADER include/spdk/accel.h 00:02:04.516 TEST_HEADER include/spdk/accel_module.h 00:02:04.516 TEST_HEADER include/spdk/assert.h 00:02:04.516 TEST_HEADER include/spdk/barrier.h 00:02:04.516 TEST_HEADER include/spdk/base64.h 00:02:04.516 TEST_HEADER include/spdk/bdev.h 00:02:04.516 TEST_HEADER include/spdk/bdev_module.h 00:02:04.516 CC app/trace_record/trace_record.o 00:02:04.516 TEST_HEADER include/spdk/bdev_zone.h 00:02:04.516 CC app/spdk_top/spdk_top.o 00:02:04.516 TEST_HEADER include/spdk/bit_array.h 00:02:04.516 TEST_HEADER include/spdk/bit_pool.h 00:02:04.516 CC app/spdk_nvme_perf/perf.o 00:02:04.516 CXX app/trace/trace.o 00:02:04.516 CC app/spdk_nvme_discover/discovery_aer.o 00:02:04.516 TEST_HEADER include/spdk/blob_bdev.h 00:02:04.516 CC app/spdk_lspci/spdk_lspci.o 00:02:04.516 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:04.516 TEST_HEADER include/spdk/blobfs.h 00:02:04.516 TEST_HEADER include/spdk/blob.h 00:02:04.516 TEST_HEADER include/spdk/conf.h 00:02:04.516 TEST_HEADER include/spdk/config.h 00:02:04.516 TEST_HEADER include/spdk/cpuset.h 00:02:04.516 TEST_HEADER include/spdk/crc16.h 00:02:04.516 TEST_HEADER include/spdk/crc32.h 00:02:04.516 CC app/spdk_nvme_identify/identify.o 00:02:04.783 TEST_HEADER include/spdk/crc64.h 00:02:04.783 TEST_HEADER include/spdk/dif.h 00:02:04.783 TEST_HEADER include/spdk/dma.h 00:02:04.783 TEST_HEADER include/spdk/endian.h 00:02:04.783 TEST_HEADER include/spdk/env_dpdk.h 00:02:04.783 TEST_HEADER include/spdk/env.h 00:02:04.783 TEST_HEADER include/spdk/event.h 00:02:04.783 TEST_HEADER include/spdk/fd_group.h 00:02:04.783 TEST_HEADER include/spdk/fd.h 00:02:04.783 TEST_HEADER include/spdk/ftl.h 00:02:04.783 TEST_HEADER include/spdk/file.h 00:02:04.783 TEST_HEADER include/spdk/gpt_spec.h 00:02:04.783 TEST_HEADER include/spdk/hexlify.h 00:02:04.783 TEST_HEADER include/spdk/histogram_data.h 00:02:04.783 TEST_HEADER include/spdk/idxd.h 00:02:04.783 TEST_HEADER include/spdk/idxd_spec.h 00:02:04.783 TEST_HEADER include/spdk/init.h 00:02:04.783 TEST_HEADER include/spdk/ioat.h 00:02:04.783 TEST_HEADER include/spdk/ioat_spec.h 00:02:04.783 TEST_HEADER include/spdk/iscsi_spec.h 00:02:04.783 TEST_HEADER include/spdk/json.h 00:02:04.783 TEST_HEADER include/spdk/jsonrpc.h 00:02:04.783 TEST_HEADER include/spdk/keyring.h 00:02:04.783 TEST_HEADER include/spdk/keyring_module.h 00:02:04.783 TEST_HEADER include/spdk/likely.h 00:02:04.783 TEST_HEADER include/spdk/log.h 00:02:04.783 TEST_HEADER include/spdk/lvol.h 00:02:04.783 TEST_HEADER include/spdk/memory.h 00:02:04.783 
TEST_HEADER include/spdk/mmio.h 00:02:04.783 TEST_HEADER include/spdk/nbd.h 00:02:04.783 TEST_HEADER include/spdk/notify.h 00:02:04.783 TEST_HEADER include/spdk/nvme.h 00:02:04.783 TEST_HEADER include/spdk/nvme_intel.h 00:02:04.783 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:04.783 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:04.783 TEST_HEADER include/spdk/nvme_spec.h 00:02:04.783 TEST_HEADER include/spdk/nvme_zns.h 00:02:04.783 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:04.783 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:04.783 TEST_HEADER include/spdk/nvmf.h 00:02:04.783 TEST_HEADER include/spdk/nvmf_spec.h 00:02:04.783 TEST_HEADER include/spdk/nvmf_transport.h 00:02:04.783 TEST_HEADER include/spdk/opal.h 00:02:04.783 TEST_HEADER include/spdk/opal_spec.h 00:02:04.783 TEST_HEADER include/spdk/pci_ids.h 00:02:04.783 TEST_HEADER include/spdk/pipe.h 00:02:04.783 TEST_HEADER include/spdk/queue.h 00:02:04.784 TEST_HEADER include/spdk/reduce.h 00:02:04.784 TEST_HEADER include/spdk/rpc.h 00:02:04.784 TEST_HEADER include/spdk/scheduler.h 00:02:04.784 TEST_HEADER include/spdk/scsi.h 00:02:04.784 TEST_HEADER include/spdk/sock.h 00:02:04.784 TEST_HEADER include/spdk/scsi_spec.h 00:02:04.784 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:04.784 TEST_HEADER include/spdk/stdinc.h 00:02:04.784 TEST_HEADER include/spdk/string.h 00:02:04.784 TEST_HEADER include/spdk/thread.h 00:02:04.784 TEST_HEADER include/spdk/trace_parser.h 00:02:04.784 TEST_HEADER include/spdk/trace.h 00:02:04.784 TEST_HEADER include/spdk/tree.h 00:02:04.784 TEST_HEADER include/spdk/ublk.h 00:02:04.784 TEST_HEADER include/spdk/util.h 00:02:04.784 TEST_HEADER include/spdk/uuid.h 00:02:04.784 TEST_HEADER include/spdk/version.h 00:02:04.784 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:04.784 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:04.784 TEST_HEADER include/spdk/vhost.h 00:02:04.784 TEST_HEADER include/spdk/vmd.h 00:02:04.784 TEST_HEADER include/spdk/xor.h 00:02:04.784 TEST_HEADER include/spdk/zipf.h 00:02:04.784 CXX test/cpp_headers/accel.o 00:02:04.784 CXX test/cpp_headers/accel_module.o 00:02:04.784 CXX test/cpp_headers/assert.o 00:02:04.784 CXX test/cpp_headers/barrier.o 00:02:04.784 CXX test/cpp_headers/base64.o 00:02:04.784 CXX test/cpp_headers/bdev.o 00:02:04.784 CXX test/cpp_headers/bdev_module.o 00:02:04.784 CXX test/cpp_headers/bdev_zone.o 00:02:04.784 CXX test/cpp_headers/bit_array.o 00:02:04.784 CXX test/cpp_headers/bit_pool.o 00:02:04.784 CXX test/cpp_headers/blob_bdev.o 00:02:04.784 CXX test/cpp_headers/blobfs_bdev.o 00:02:04.784 CXX test/cpp_headers/blobfs.o 00:02:04.784 CXX test/cpp_headers/blob.o 00:02:04.784 CXX test/cpp_headers/conf.o 00:02:04.784 CXX test/cpp_headers/config.o 00:02:04.784 CXX test/cpp_headers/cpuset.o 00:02:04.784 CXX test/cpp_headers/crc16.o 00:02:04.784 CC app/iscsi_tgt/iscsi_tgt.o 00:02:04.784 CC app/spdk_dd/spdk_dd.o 00:02:04.784 CC app/nvmf_tgt/nvmf_main.o 00:02:04.784 CXX test/cpp_headers/crc32.o 00:02:04.784 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:04.784 CC examples/ioat/perf/perf.o 00:02:04.784 CC test/app/jsoncat/jsoncat.o 00:02:04.784 CC examples/ioat/verify/verify.o 00:02:04.784 CC examples/util/zipf/zipf.o 00:02:04.784 CC test/env/vtophys/vtophys.o 00:02:04.784 CC test/app/stub/stub.o 00:02:04.784 CC test/thread/poller_perf/poller_perf.o 00:02:04.784 CC test/env/pci/pci_ut.o 00:02:04.784 CC test/app/histogram_perf/histogram_perf.o 00:02:04.784 CC app/spdk_tgt/spdk_tgt.o 00:02:04.784 CC test/env/memory/memory_ut.o 00:02:04.784 CC app/fio/nvme/fio_plugin.o 
00:02:04.784 CC test/dma/test_dma/test_dma.o 00:02:04.784 CC test/app/bdev_svc/bdev_svc.o 00:02:04.784 CC app/fio/bdev/fio_plugin.o 00:02:05.045 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:05.045 LINK spdk_lspci 00:02:05.045 CC test/env/mem_callbacks/mem_callbacks.o 00:02:05.045 LINK rpc_client_test 00:02:05.045 LINK jsoncat 00:02:05.045 LINK spdk_nvme_discover 00:02:05.045 LINK vtophys 00:02:05.045 LINK interrupt_tgt 00:02:05.045 LINK poller_perf 00:02:05.045 LINK zipf 00:02:05.045 LINK env_dpdk_post_init 00:02:05.045 CXX test/cpp_headers/crc64.o 00:02:05.045 CXX test/cpp_headers/dif.o 00:02:05.045 CXX test/cpp_headers/dma.o 00:02:05.045 CXX test/cpp_headers/endian.o 00:02:05.045 LINK histogram_perf 00:02:05.045 CXX test/cpp_headers/env_dpdk.o 00:02:05.045 CXX test/cpp_headers/env.o 00:02:05.045 CXX test/cpp_headers/event.o 00:02:05.316 CXX test/cpp_headers/fd_group.o 00:02:05.316 LINK spdk_trace_record 00:02:05.316 CXX test/cpp_headers/fd.o 00:02:05.316 LINK nvmf_tgt 00:02:05.316 LINK iscsi_tgt 00:02:05.316 CXX test/cpp_headers/file.o 00:02:05.316 CXX test/cpp_headers/ftl.o 00:02:05.316 CXX test/cpp_headers/gpt_spec.o 00:02:05.316 LINK stub 00:02:05.316 LINK verify 00:02:05.316 LINK bdev_svc 00:02:05.316 CXX test/cpp_headers/hexlify.o 00:02:05.316 CXX test/cpp_headers/histogram_data.o 00:02:05.316 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:05.316 CXX test/cpp_headers/idxd.o 00:02:05.316 LINK ioat_perf 00:02:05.316 LINK spdk_tgt 00:02:05.316 CXX test/cpp_headers/idxd_spec.o 00:02:05.316 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:05.316 CXX test/cpp_headers/init.o 00:02:05.316 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:05.316 CXX test/cpp_headers/ioat.o 00:02:05.316 CXX test/cpp_headers/ioat_spec.o 00:02:05.316 CXX test/cpp_headers/iscsi_spec.o 00:02:05.316 CXX test/cpp_headers/json.o 00:02:05.316 CXX test/cpp_headers/jsonrpc.o 00:02:05.316 CXX test/cpp_headers/keyring.o 00:02:05.575 CXX test/cpp_headers/keyring_module.o 00:02:05.575 CXX test/cpp_headers/likely.o 00:02:05.575 LINK spdk_dd 00:02:05.575 CXX test/cpp_headers/log.o 00:02:05.575 CXX test/cpp_headers/lvol.o 00:02:05.575 CXX test/cpp_headers/memory.o 00:02:05.575 CXX test/cpp_headers/mmio.o 00:02:05.575 CXX test/cpp_headers/nbd.o 00:02:05.575 LINK spdk_trace 00:02:05.575 CXX test/cpp_headers/notify.o 00:02:05.575 CXX test/cpp_headers/nvme.o 00:02:05.575 CXX test/cpp_headers/nvme_ocssd.o 00:02:05.575 CXX test/cpp_headers/nvme_intel.o 00:02:05.575 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:05.575 LINK pci_ut 00:02:05.575 CXX test/cpp_headers/nvme_spec.o 00:02:05.575 CXX test/cpp_headers/nvme_zns.o 00:02:05.575 CXX test/cpp_headers/nvmf_cmd.o 00:02:05.575 LINK test_dma 00:02:05.575 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:05.575 CXX test/cpp_headers/nvmf.o 00:02:05.575 CXX test/cpp_headers/nvmf_spec.o 00:02:05.575 CXX test/cpp_headers/nvmf_transport.o 00:02:05.575 CXX test/cpp_headers/opal.o 00:02:05.575 CXX test/cpp_headers/opal_spec.o 00:02:05.834 LINK nvme_fuzz 00:02:05.834 CXX test/cpp_headers/pci_ids.o 00:02:05.834 CXX test/cpp_headers/pipe.o 00:02:05.834 CXX test/cpp_headers/queue.o 00:02:05.834 CXX test/cpp_headers/reduce.o 00:02:05.834 CC examples/sock/hello_world/hello_sock.o 00:02:05.834 CC test/event/event_perf/event_perf.o 00:02:05.834 CC test/event/reactor_perf/reactor_perf.o 00:02:05.834 CXX test/cpp_headers/rpc.o 00:02:05.834 CC test/event/reactor/reactor.o 00:02:05.834 LINK spdk_bdev 00:02:05.834 CC examples/thread/thread/thread_ex.o 00:02:05.834 LINK spdk_nvme 00:02:05.834 CXX 
test/cpp_headers/scheduler.o 00:02:05.834 CC test/event/app_repeat/app_repeat.o 00:02:05.834 CC examples/idxd/perf/perf.o 00:02:05.834 CC examples/vmd/lsvmd/lsvmd.o 00:02:05.834 CXX test/cpp_headers/scsi.o 00:02:05.834 CC examples/vmd/led/led.o 00:02:05.834 CXX test/cpp_headers/scsi_spec.o 00:02:05.834 CXX test/cpp_headers/sock.o 00:02:06.097 CXX test/cpp_headers/stdinc.o 00:02:06.097 CC test/event/scheduler/scheduler.o 00:02:06.097 CXX test/cpp_headers/string.o 00:02:06.097 CXX test/cpp_headers/thread.o 00:02:06.097 CXX test/cpp_headers/trace.o 00:02:06.097 CXX test/cpp_headers/tree.o 00:02:06.097 CXX test/cpp_headers/trace_parser.o 00:02:06.097 CXX test/cpp_headers/ublk.o 00:02:06.097 CXX test/cpp_headers/util.o 00:02:06.097 CXX test/cpp_headers/uuid.o 00:02:06.097 CXX test/cpp_headers/version.o 00:02:06.097 CXX test/cpp_headers/vfio_user_pci.o 00:02:06.097 CXX test/cpp_headers/vfio_user_spec.o 00:02:06.097 CXX test/cpp_headers/vhost.o 00:02:06.097 CXX test/cpp_headers/vmd.o 00:02:06.097 CXX test/cpp_headers/xor.o 00:02:06.097 CXX test/cpp_headers/zipf.o 00:02:06.097 LINK event_perf 00:02:06.097 LINK reactor 00:02:06.097 LINK reactor_perf 00:02:06.097 LINK vhost_fuzz 00:02:06.097 CC app/vhost/vhost.o 00:02:06.097 LINK mem_callbacks 00:02:06.097 LINK spdk_nvme_perf 00:02:06.097 LINK lsvmd 00:02:06.098 LINK app_repeat 00:02:06.358 LINK led 00:02:06.358 LINK spdk_nvme_identify 00:02:06.358 LINK spdk_top 00:02:06.358 LINK hello_sock 00:02:06.358 CC test/nvme/err_injection/err_injection.o 00:02:06.358 CC test/nvme/e2edp/nvme_dp.o 00:02:06.358 CC test/nvme/aer/aer.o 00:02:06.358 CC test/nvme/overhead/overhead.o 00:02:06.358 CC test/nvme/startup/startup.o 00:02:06.358 CC test/nvme/sgl/sgl.o 00:02:06.358 CC test/nvme/reset/reset.o 00:02:06.358 LINK thread 00:02:06.358 CC test/nvme/simple_copy/simple_copy.o 00:02:06.358 CC test/nvme/reserve/reserve.o 00:02:06.358 CC test/blobfs/mkfs/mkfs.o 00:02:06.358 CC test/accel/dif/dif.o 00:02:06.358 CC test/nvme/connect_stress/connect_stress.o 00:02:06.358 LINK scheduler 00:02:06.358 CC test/nvme/boot_partition/boot_partition.o 00:02:06.358 CC test/nvme/compliance/nvme_compliance.o 00:02:06.618 CC test/nvme/fused_ordering/fused_ordering.o 00:02:06.618 CC test/lvol/esnap/esnap.o 00:02:06.618 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:06.618 CC test/nvme/fdp/fdp.o 00:02:06.618 CC test/nvme/cuse/cuse.o 00:02:06.618 LINK vhost 00:02:06.618 LINK idxd_perf 00:02:06.618 LINK err_injection 00:02:06.618 LINK boot_partition 00:02:06.618 LINK reserve 00:02:06.618 LINK startup 00:02:06.618 LINK mkfs 00:02:06.618 LINK nvme_dp 00:02:06.618 LINK doorbell_aers 00:02:06.877 LINK fused_ordering 00:02:06.877 LINK simple_copy 00:02:06.877 LINK overhead 00:02:06.877 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:06.877 CC examples/nvme/reconnect/reconnect.o 00:02:06.877 CC examples/nvme/abort/abort.o 00:02:06.877 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:06.877 CC examples/nvme/hotplug/hotplug.o 00:02:06.877 CC examples/nvme/hello_world/hello_world.o 00:02:06.877 CC examples/nvme/arbitration/arbitration.o 00:02:06.877 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:06.877 LINK connect_stress 00:02:06.877 LINK aer 00:02:06.877 LINK reset 00:02:06.877 LINK sgl 00:02:06.877 LINK nvme_compliance 00:02:06.877 LINK fdp 00:02:06.877 CC examples/accel/perf/accel_perf.o 00:02:07.135 LINK memory_ut 00:02:07.135 CC examples/blob/cli/blobcli.o 00:02:07.135 CC examples/blob/hello_world/hello_blob.o 00:02:07.135 LINK cmb_copy 00:02:07.135 LINK pmr_persistence 00:02:07.135 
LINK dif 00:02:07.135 LINK hello_world 00:02:07.135 LINK hotplug 00:02:07.135 LINK reconnect 00:02:07.135 LINK arbitration 00:02:07.135 LINK abort 00:02:07.393 LINK hello_blob 00:02:07.393 LINK nvme_manage 00:02:07.393 LINK accel_perf 00:02:07.393 CC test/bdev/bdevio/bdevio.o 00:02:07.650 LINK blobcli 00:02:07.650 LINK iscsi_fuzz 00:02:07.908 CC examples/bdev/hello_world/hello_bdev.o 00:02:07.908 CC examples/bdev/bdevperf/bdevperf.o 00:02:07.908 LINK bdevio 00:02:08.166 LINK hello_bdev 00:02:08.166 LINK cuse 00:02:08.733 LINK bdevperf 00:02:08.996 CC examples/nvmf/nvmf/nvmf.o 00:02:09.253 LINK nvmf 00:02:11.780 LINK esnap 00:02:12.040 00:02:12.040 real 0m48.885s 00:02:12.040 user 10m5.196s 00:02:12.040 sys 2m28.076s 00:02:12.040 10:15:06 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:12.040 10:15:06 make -- common/autotest_common.sh@10 -- $ set +x 00:02:12.040 ************************************ 00:02:12.040 END TEST make 00:02:12.040 ************************************ 00:02:12.040 10:15:06 -- common/autotest_common.sh@1142 -- $ return 0 00:02:12.040 10:15:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:12.040 10:15:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:12.040 10:15:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:12.040 10:15:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.040 10:15:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:12.040 10:15:06 -- pm/common@44 -- $ pid=2098250 00:02:12.040 10:15:06 -- pm/common@50 -- $ kill -TERM 2098250 00:02:12.040 10:15:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.040 10:15:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:12.040 10:15:06 -- pm/common@44 -- $ pid=2098252 00:02:12.040 10:15:06 -- pm/common@50 -- $ kill -TERM 2098252 00:02:12.040 10:15:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.040 10:15:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:12.040 10:15:06 -- pm/common@44 -- $ pid=2098254 00:02:12.040 10:15:06 -- pm/common@50 -- $ kill -TERM 2098254 00:02:12.040 10:15:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.040 10:15:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:12.040 10:15:06 -- pm/common@44 -- $ pid=2098280 00:02:12.040 10:15:06 -- pm/common@50 -- $ sudo -E kill -TERM 2098280 00:02:12.040 10:15:06 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:12.040 10:15:06 -- nvmf/common.sh@7 -- # uname -s 00:02:12.040 10:15:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:12.040 10:15:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:12.040 10:15:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:12.040 10:15:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:12.040 10:15:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:12.040 10:15:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:12.040 10:15:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:12.040 10:15:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:12.040 10:15:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:12.040 10:15:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:02:12.040 10:15:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:12.040 10:15:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:12.040 10:15:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:12.040 10:15:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:12.040 10:15:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:12.040 10:15:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:12.040 10:15:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:12.040 10:15:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:12.040 10:15:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:12.040 10:15:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:12.040 10:15:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.040 10:15:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.040 10:15:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.040 10:15:06 -- paths/export.sh@5 -- # export PATH 00:02:12.040 10:15:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.040 10:15:06 -- nvmf/common.sh@47 -- # : 0 00:02:12.040 10:15:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:12.040 10:15:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:12.040 10:15:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:12.040 10:15:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:12.040 10:15:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:12.040 10:15:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:12.040 10:15:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:12.040 10:15:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:12.040 10:15:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:12.040 10:15:06 -- spdk/autotest.sh@32 -- # uname -s 00:02:12.040 10:15:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:12.040 10:15:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:12.040 10:15:06 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:12.040 10:15:06 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:12.040 10:15:06 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:12.040 10:15:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:12.040 10:15:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:12.040 10:15:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:12.040 10:15:06 -- spdk/autotest.sh@48 -- # udevadm_pid=2153835 00:02:12.040 10:15:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:12.040 10:15:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:12.040 10:15:06 -- pm/common@17 -- # local monitor 00:02:12.040 10:15:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.040 10:15:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.040 10:15:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.040 10:15:06 -- pm/common@21 -- # date +%s 00:02:12.040 10:15:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.040 10:15:06 -- pm/common@21 -- # date +%s 00:02:12.040 10:15:06 -- pm/common@25 -- # sleep 1 00:02:12.040 10:15:06 -- pm/common@21 -- # date +%s 00:02:12.040 10:15:06 -- pm/common@21 -- # date +%s 00:02:12.040 10:15:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721031306 00:02:12.040 10:15:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721031306 00:02:12.040 10:15:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721031306 00:02:12.040 10:15:06 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721031306 00:02:12.040 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721031306_collect-vmstat.pm.log 00:02:12.040 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721031306_collect-cpu-load.pm.log 00:02:12.040 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721031306_collect-cpu-temp.pm.log 00:02:12.040 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721031306_collect-bmc-pm.bmc.pm.log 00:02:13.412 10:15:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:13.412 10:15:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:13.412 10:15:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:13.412 10:15:07 -- common/autotest_common.sh@10 -- # set +x 00:02:13.412 10:15:07 -- spdk/autotest.sh@59 -- # create_test_list 00:02:13.412 10:15:07 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:13.412 10:15:07 -- common/autotest_common.sh@10 -- # set +x 00:02:13.412 10:15:07 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:13.412 10:15:07 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.412 10:15:07 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
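The four "Redirecting to" records close out the prologue: each resource collector is launched in the background with a shared timestamped prefix, logs to its own .pm.log, and leaves a PID file under the power output directory so the teardown traced earlier can stop it with kill -TERM. A rough sketch of that start/stop pairing; the helper names and paths below are placeholders for illustration, not SPDK's actual pm/common functions:

    # Hypothetical start/stop pair illustrating the PID-file pattern in the trace.
    power_dir=./power                       # placeholder for $output/power
    prefix=monitor.autotest.sh.$(date +%s)

    start_collector() {                     # assumed helper, illustration only
        local name=$1
        echo "Redirecting to $power_dir/${prefix}_${name}.pm.log"
        "./$name" -d "$power_dir" -l -p "$prefix" \
            >"$power_dir/${prefix}_${name}.pm.log" 2>&1 &
        echo $! >"$power_dir/${name}.pid"   # record the PID for teardown
    }

    stop_collectors() {                     # mirrors the kill -TERM loop at exit
        local pidfile
        for pidfile in "$power_dir"/*.pid; do
            [[ -e $pidfile ]] || continue   # skip monitors that never started
            kill -TERM "$(<"$pidfile")" 2>/dev/null || true
        done
    }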
00:02:13.412 10:15:07 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:13.412 10:15:07 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.412 10:15:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:13.412 10:15:07 -- common/autotest_common.sh@1455 -- # uname 00:02:13.412 10:15:07 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:13.412 10:15:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:13.412 10:15:07 -- common/autotest_common.sh@1475 -- # uname 00:02:13.412 10:15:07 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:13.412 10:15:07 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:13.412 10:15:07 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:13.412 10:15:07 -- spdk/autotest.sh@72 -- # hash lcov 00:02:13.413 10:15:07 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:13.413 10:15:07 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:13.413 --rc lcov_branch_coverage=1 00:02:13.413 --rc lcov_function_coverage=1 00:02:13.413 --rc genhtml_branch_coverage=1 00:02:13.413 --rc genhtml_function_coverage=1 00:02:13.413 --rc genhtml_legend=1 00:02:13.413 --rc geninfo_all_blocks=1 00:02:13.413 ' 00:02:13.413 10:15:07 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:13.413 --rc lcov_branch_coverage=1 00:02:13.413 --rc lcov_function_coverage=1 00:02:13.413 --rc genhtml_branch_coverage=1 00:02:13.413 --rc genhtml_function_coverage=1 00:02:13.413 --rc genhtml_legend=1 00:02:13.413 --rc geninfo_all_blocks=1 00:02:13.413 ' 00:02:13.413 10:15:07 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:13.413 --rc lcov_branch_coverage=1 00:02:13.413 --rc lcov_function_coverage=1 00:02:13.413 --rc genhtml_branch_coverage=1 00:02:13.413 --rc genhtml_function_coverage=1 00:02:13.413 --rc genhtml_legend=1 00:02:13.413 --rc geninfo_all_blocks=1 00:02:13.413 --no-external' 00:02:13.413 10:15:07 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:13.413 --rc lcov_branch_coverage=1 00:02:13.413 --rc lcov_function_coverage=1 00:02:13.413 --rc genhtml_branch_coverage=1 00:02:13.413 --rc genhtml_function_coverage=1 00:02:13.413 --rc genhtml_legend=1 00:02:13.413 --rc geninfo_all_blocks=1 00:02:13.413 --no-external' 00:02:13.413 10:15:07 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:13.413 lcov: LCOV version 1.14 00:02:13.413 10:15:07 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:18.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:18.678 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:18.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:18.678 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:18.678 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:18.678 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:18.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:18.678 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:18.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:18.678 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:18.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:18.678 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:18.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:18.678 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:18.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:18.678 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:18.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:18.678 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:18.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:18.678 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:18.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:18.678 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:18.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:18.678 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:18.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:18.678 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:18.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:18.678 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:18.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 
00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:18.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:18.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:18.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:18.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:18.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:18.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:18.938 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:18.938 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:18.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:18.939 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:18.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:18.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:45.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:45.492 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:53.592 10:15:46 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:53.593 10:15:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:53.593 10:15:46 -- common/autotest_common.sh@10 -- # set +x 00:02:53.593 10:15:46 -- spdk/autotest.sh@91 -- # rm -f 00:02:53.593 10:15:46 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.593 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:53.593 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:53.593 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:53.593 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:53.593 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:53.593 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:53.593 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:53.593 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:53.593 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:53.593 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:53.593 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:53.593 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:53.593 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:53.851 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:53.851 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:53.851 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:53.851 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:53.851 10:15:48 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:53.851 10:15:48 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:53.851 10:15:48 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:53.851 10:15:48 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:53.851 10:15:48 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:53.851 10:15:48 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:53.851 10:15:48 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:53.851 10:15:48 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:53.851 10:15:48 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:53.851 10:15:48 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:53.851 10:15:48 -- 
spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:53.851 10:15:48 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:53.851 10:15:48 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:53.851 10:15:48 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:53.851 10:15:48 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:53.851 No valid GPT data, bailing 00:02:53.851 10:15:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:53.851 10:15:48 -- scripts/common.sh@391 -- # pt= 00:02:53.851 10:15:48 -- scripts/common.sh@392 -- # return 1 00:02:53.851 10:15:48 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:53.851 1+0 records in 00:02:53.851 1+0 records out 00:02:53.851 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00175138 s, 599 MB/s 00:02:53.851 10:15:48 -- spdk/autotest.sh@118 -- # sync 00:02:53.851 10:15:48 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:53.851 10:15:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:53.851 10:15:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:55.751 10:15:50 -- spdk/autotest.sh@124 -- # uname -s 00:02:55.751 10:15:50 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:55.751 10:15:50 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:55.751 10:15:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:55.751 10:15:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:55.751 10:15:50 -- common/autotest_common.sh@10 -- # set +x 00:02:55.751 ************************************ 00:02:55.751 START TEST setup.sh 00:02:55.751 ************************************ 00:02:55.751 10:15:50 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:56.010 * Looking for test storage... 00:02:56.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:56.010 10:15:50 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:56.010 10:15:50 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:56.010 10:15:50 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:56.010 10:15:50 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:56.010 10:15:50 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:56.010 10:15:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:56.010 ************************************ 00:02:56.010 START TEST acl 00:02:56.010 ************************************ 00:02:56.010 10:15:50 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:56.010 * Looking for test storage... 
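A few records back, autotest zeroed /dev/nvme0n1 only after block_in_use confirmed the disk carried no partition table: spdk-gpt.py bailed with "No valid GPT data", blkid reported an empty PTTYPE, and only then did the 1 MiB dd wipe run. That guard reduces to roughly the following, with the device name hard-coded here purely for illustration:

    # Hedged sketch of the wipe guard seen in the trace above.
    dev=/dev/nvme0n1                        # placeholder device
    pt=$(blkid -s PTTYPE -o value "$dev")   # empty when no GPT/MBR is present
    if [[ -z $pt ]]; then
        # no partition table: zero the first MiB so stale metadata cannot survive
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi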
00:02:56.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:56.010 10:15:50 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:56.010 10:15:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:56.010 10:15:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:56.010 10:15:50 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:56.010 10:15:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:56.010 10:15:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:56.010 10:15:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:56.010 10:15:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:56.010 10:15:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:56.010 10:15:50 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:56.010 10:15:50 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:56.010 10:15:50 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:56.010 10:15:50 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:56.010 10:15:50 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:56.010 10:15:50 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:56.010 10:15:50 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:57.407 10:15:51 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:57.407 10:15:51 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:57.407 10:15:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.407 10:15:51 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:57.407 10:15:51 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.407 10:15:51 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:58.342 Hugepages 00:02:58.342 node hugesize free / total 00:02:58.342 10:15:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:58.342 10:15:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.342 10:15:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:58.600 10:15:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.600 10:15:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:58.600 10:15:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.600 10:15:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 00:02:58.600 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:58.600 10:15:53 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:58.600 10:15:53 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:58.600 10:15:53 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:58.600 10:15:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:58.600 ************************************ 00:02:58.600 START TEST denied 00:02:58.600 ************************************ 00:02:58.600 10:15:53 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:58.600 10:15:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:58.600 10:15:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:58.600 10:15:53 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:58.601 10:15:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.601 10:15:53 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:59.973 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:59.973 10:15:54 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:59.973 10:15:54 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:59.973 10:15:54 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:59.973 10:15:54 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:59.973 10:15:54 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:59.973 10:15:54 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:59.973 10:15:54 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:59.973 10:15:54 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:59.973 10:15:54 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.973 10:15:54 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.501 00:03:02.501 real 0m3.722s 00:03:02.501 user 0m1.092s 00:03:02.501 sys 0m1.746s 00:03:02.501 10:15:56 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:02.501 10:15:56 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:02.501 ************************************ 00:03:02.501 END TEST denied 00:03:02.501 ************************************ 00:03:02.501 10:15:56 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:02.501 10:15:56 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:02.501 10:15:56 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:02.501 10:15:56 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:02.501 10:15:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:02.501 ************************************ 00:03:02.501 START TEST allowed 00:03:02.501 ************************************ 00:03:02.501 10:15:56 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:02.501 10:15:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:02.501 10:15:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:02.501 10:15:56 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:02.501 10:15:56 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.501 10:15:56 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:05.029 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:05.029 10:15:59 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:05.029 10:15:59 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:05.029 10:15:59 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:05.029 10:15:59 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:05.029 10:15:59 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.455 00:03:06.455 real 0m3.800s 00:03:06.455 user 0m1.045s 00:03:06.455 sys 0m1.603s 00:03:06.455 10:16:00 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.455 10:16:00 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:06.455 ************************************ 00:03:06.455 END TEST allowed 00:03:06.455 ************************************ 00:03:06.455 10:16:00 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:06.455 00:03:06.455 real 0m10.287s 00:03:06.455 user 0m3.238s 00:03:06.455 sys 0m5.075s 00:03:06.455 10:16:00 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.455 10:16:00 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:06.455 ************************************ 00:03:06.455 END TEST acl 00:03:06.455 ************************************ 00:03:06.455 10:16:00 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:06.455 10:16:00 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:06.455 10:16:00 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:06.455 10:16:00 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.455 10:16:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:06.455 ************************************ 00:03:06.455 START TEST hugepages 00:03:06.455 ************************************ 00:03:06.455 10:16:00 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:06.455 * Looking for test storage... 00:03:06.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.455 10:16:00 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43726336 kB' 'MemAvailable: 47230128 kB' 'Buffers: 2704 kB' 'Cached: 10275136 kB' 'SwapCached: 0 kB' 'Active: 7268868 kB' 'Inactive: 3506596 kB' 'Active(anon): 6874276 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500876 kB' 'Mapped: 173064 kB' 'Shmem: 6376652 kB' 'KReclaimable: 192888 kB' 'Slab: 560220 kB' 'SReclaimable: 192888 kB' 'SUnreclaim: 367332 kB' 'KernelStack: 12928 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 7989300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
00:03:06.455 10:16:00 setup.sh.hugepages -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop walks every /proc/meminfo key from MemTotal through HugePages_Surp, compares each against Hugepagesize, and skips it with continue]
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:06.456 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:06.457 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:06.457 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:06.457 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:06.457 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:06.457 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:06.457 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:06.457 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:06.457 10:16:00 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:06.457 10:16:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:06.457 10:16:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:06.457 10:16:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:06.457 ************************************
00:03:06.457 START TEST default_setup
00:03:06.457 ************************************
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:06.457 10:16:00 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:07.828 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:07.828 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:07.828 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
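[note] The key-scan condensed above is the core of setup/common.sh's get_meminfo: split each /proc/meminfo line on ': ', compare the key against the requested field, and echo the value on a match. A minimal stand-alone sketch of that pattern; the function name is illustrative, and the direct /proc/meminfo read omits the per-node handling the real helper has:

    get_meminfo_sketch() {   # hypothetical name; mirrors setup/common.sh@17-33
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # non-matching keys fall through; this is what produces the
            # long runs of "continue" in the xtrace above
            [[ $var == "$get" ]] || continue
            echo "$val"       # e.g. 2048 for Hugepagesize
            return 0
        done </proc/meminfo
        return 1
    }
    # usage: get_meminfo_sketch Hugepagesize   -> 2048

The trace then derives nr_hugepages=1024 from the requested size: 2097152 kB divided by the 2048 kB default page size.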
00:03:07.828 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:07.828 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:07.828 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:07.828 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:07.828 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:07.828 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:07.828 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:07.828 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:07.828 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:07.828 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:07.828 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:07.829 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:07.829 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:08.767 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45801296 kB' 'MemAvailable: 49305072 kB' 'Buffers: 2704 kB' 'Cached: 10275228 kB' 'SwapCached: 0 kB' 'Active: 7292756 kB' 'Inactive: 3506596 kB' 'Active(anon): 6898164 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525144 kB' 'Mapped: 174004 kB' 'Shmem: 6376744 kB' 'KReclaimable: 192856 kB' 'Slab: 560236 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367380 kB' 'KernelStack: 12816 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8016556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196068 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: every key from MemTotal through HardwareCorrupted is compared against AnonHugePages and skipped with continue]
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
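[note] At this point verify_nr_hugepages has anon=0 (the transparent_hugepage check at hugepages.sh@96 saw "[madvise]", so AnonHugePages stays 0) and is collecting the surplus counter the same way. A hedged sketch of the check this is building toward, reusing get_meminfo_sketch from the earlier note; the trace only shows the reads, so the exact assertion is an assumption:

    verify_nr_hugepages_sketch() {
        local anon surp resv total free
        anon=$(get_meminfo_sketch AnonHugePages)    # 0 kB here: THP not forced on
        surp=$(get_meminfo_sketch HugePages_Surp)   # pages allocated beyond the pool
        resv=$(get_meminfo_sketch HugePages_Rsvd)   # reserved but not yet faulted in
        total=$(get_meminfo_sketch HugePages_Total)
        free=$(get_meminfo_sketch HugePages_Free)
        # the snapshots above report 1024/1024 with 0 reserved/surplus,
        # so for this run the straightforward check would hold
        (( total == 1024 && free == total )) || return 1
    }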
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:08.767 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45801300 kB' 'MemAvailable: 49305076 kB' 'Buffers: 2704 kB' 'Cached: 10275228 kB' 'SwapCached: 0 kB' 'Active: 7292764 kB' 'Inactive: 3506596 kB' 'Active(anon): 6898172 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524704 kB' 'Mapped: 173952 kB' 'Shmem: 6376744 kB' 'KReclaimable: 192856 kB' 'Slab: 560196 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367340 kB' 'KernelStack: 12880 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8016576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196004 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: every key from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped with continue]
00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45797444 kB' 'MemAvailable: 49301220 kB' 'Buffers: 2704 kB' 'Cached: 10275236 kB' 'SwapCached: 0 kB' 'Active: 7290184 kB' 'Inactive: 3506596 kB' 'Active(anon): 6895592 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522164 kB' 'Mapped: 173952 kB' 'Shmem: 6376752 kB' 'KReclaimable: 192856 kB' 'Slab: 560196 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367340 kB' 'KernelStack: 12848 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8013680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB' 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.768 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 
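The trace above is setup/common.sh's get_meminfo() helper at work: it snapshots a meminfo file into an array with mapfile, strips any per-node "Node N " prefix, then field-splits each "Key: value" line on IFS=': ' until the requested key matches. A minimal standalone sketch of the same pattern (illustrative only, not the SPDK helper; the function and variable names here are made up):

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup traced above. extglob is required for
    # the "Node N " prefix strip used on per-node meminfo files.
    shopt -s extglob

    get_meminfo_value() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        # Per-node files live under sysfs and prefix every line with "Node N ".
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem <"$mem_f"          # one array element per meminfo line
        mem=("${mem[@]#Node +([0-9]) }")  # drop the per-node prefix, if any
        for line in "${mem[@]}"; do
            # e.g. "HugePages_Surp: 0" -> var=HugePages_Surp, val=0
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    surp=$(get_meminfo_value HugePages_Surp)          # system-wide
    node0_surp=$(get_meminfo_value HugePages_Surp 0)  # NUMA node 0 only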
00:03:08.769 [xtrace condensed: setup/common.sh@31-@32 loop compares every snapshot key (MemTotal … HugePages_Free) against HugePages_Rsvd and continues; HugePages_Rsvd then matches]
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:08.769 nr_hugepages=1024
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:08.769 resv_hugepages=0
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:08.769 surplus_hugepages=0
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:08.769 anon_hugepages=0
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
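At hugepages.sh@107-@109 the script asserts that the kernel accounts for every requested page: the HugePages_Total it is about to read back must equal the configured pool plus the reserved and surplus counts just collected (all zero here, so the sum reduces to 1024). The same arithmetic can be rechecked directly from /proc/meminfo; a standalone sketch mirroring that check (not the SPDK script itself; "want" stands in for its nr_hugepages value):

    #!/usr/bin/env bash
    # Recheck the hugepages.sh@107/@110 identity straight from /proc/meminfo.
    want=1024   # the pool size the test configured (nr_hugepages)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    if (( total == want + surp + rsvd )); then
        echo "hugepage pool consistent: $total pages"
    else
        echo "pool mismatch: total=$total want=$want rsvd=$rsvd surp=$surp" >&2
        exit 1
    fi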
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:08.769 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45792656 kB' 'MemAvailable: 49296432 kB' 'Buffers: 2704 kB' 'Cached: 10275256 kB' 'SwapCached: 0 kB' 'Active: 7292536 kB' 'Inactive: 3506596 kB' 'Active(anon): 6897944 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524464 kB' 'Mapped: 173864 kB' 'Shmem: 6376772 kB' 'KReclaimable: 192856 kB' 'Slab: 560220 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367364 kB' 'KernelStack: 12768 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8016620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195988 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
00:03:08.770 [xtrace condensed: setup/common.sh@31-@32 loop compares every snapshot key (MemTotal … HugePages_Free) against HugePages_Total and continues; HugePages_Total then matches]
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:08.770 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21134200 kB' 'MemUsed: 11742740 kB' 'SwapCached: 0 kB' 'Active: 5423000 kB' 'Inactive: 3264144 kB' 'Active(anon): 5234428 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8394632 kB' 'Mapped: 58476 kB' 'AnonPages: 295668 kB' 'Shmem: 4941916 kB' 'KernelStack: 7416 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121484 kB' 'Slab: 312976 kB' 'SReclaimable: 121484 kB' 'SUnreclaim: 191492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:08.771 [xtrace condensed: setup/common.sh@31-@32 loop compares the node0 snapshot keys (MemTotal … Slab) against HugePages_Surp, continuing on each; the excerpt breaks off here mid-scan]
-- setup/common.sh@31 -- # IFS=': ' 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:08.771 node0=1024 expecting 1024 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:08.771 00:03:08.771 real 0m2.417s 00:03:08.771 user 0m0.667s 00:03:08.771 sys 0m0.875s 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:08.771 10:16:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:08.771 ************************************ 00:03:08.771 END TEST default_setup 00:03:08.771 ************************************ 00:03:08.771 10:16:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:08.771 10:16:03 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:08.771 10:16:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:08.771 10:16:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.771 10:16:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:08.771 ************************************ 00:03:08.771 START TEST per_node_1G_alloc 00:03:08.771 ************************************ 00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:08.771 10:16:03 
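The @31/@32 pattern condensed above is get_meminfo in setup/common.sh scanning one meminfo file for a single field. A minimal runnable sketch of that scan, reconstructed from the trace rather than copied from the SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob                     # needed for the "Node N " prefix strip below

    get_meminfo() {                      # usage: get_meminfo HugePages_Surp [node]
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem
        # per-node counters live under /sys/devices/system/node/node<N>/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # per-node lines carry a "Node 0 " prefix
        local IFS=': '
        while read -r var val _; do
            [[ $var == "$get" ]] || continue   # the @32 "continue" logged above
            echo "$val"                        # field value, without the "kB" suffix
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

    get_meminfo HugePages_Surp 0         # prints 0 given the node0 state logged above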
00:03:08.771 10:16:03 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:08.771 10:16:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:08.771 10:16:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:08.771 10:16:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:08.771 ************************************
00:03:08.771 START TEST per_node_1G_alloc
00:03:08.771 ************************************
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:08.771 10:16:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
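In the trace above, get_test_nr_hugepages converts the 1 GiB request into default-size pages and get_test_nr_hugepages_per_node spreads them over nodes 0 and 1; the device-binding output of that scripts/setup.sh run follows below. A sketch of the arithmetic, using this run's 2048 kB hugepage size from the meminfo dumps:

    size_kb=1048576                             # get_test_nr_hugepages 1048576 0 1
    hugepagesize_kb=2048                        # Hugepagesize: 2048 kB on this host
    nr_hugepages=$((size_kb / hugepagesize_kb)) # = 512 pages

    declare -a nodes_test
    for node in 0 1; do                         # user_nodes=('0' '1')
        nodes_test[node]=$nr_hugepages          # 512 pages requested on each node
    done
    echo "NRHUGE=$nr_hugepages HUGENODE=0,1"    # the environment handed to setup.sh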
00:03:10.151 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:10.151 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:10.151 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:10.151 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:10.151 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:10.151 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:10.151 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:10.151 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:10.151 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:10.151 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:10.151 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:10.151 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:10.151 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:10.151 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:10.151 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:10.151 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:10.151 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
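"Already using the vfio-pci driver" means setup.sh found each device pre-bound and left the binding alone. An illustrative way to check a binding by hand (not setup.sh's own code; the BDF is taken from the list above):

    bdf=0000:88:00.0                         # the NVMe device in the list above
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
        echo "$bdf -> $(basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)")"
    else
        echo "$bdf is not bound to any driver"
    fi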
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.151 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45815396 kB' 'MemAvailable: 49319172 kB' 'Buffers: 2704 kB' 'Cached: 10275340 kB' 'SwapCached: 0 kB' 'Active: 7287628 kB' 'Inactive: 3506596 kB' 'Active(anon): 6893036 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519480 kB' 'Mapped: 173184 kB' 'Shmem: 6376856 kB' 'KReclaimable: 192856 kB' 'Slab: 560268 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367412 kB' 'KernelStack: 12832 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8010676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: setup/common.sh@31/@32 repeat for every /proc/meminfo field from MemTotal through HardwareCorrupted, each compared against AnonHugePages and skipped via continue]
00:03:10.152 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:10.152 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.152 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:10.152 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
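The @96 test above ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) glob-matches the active transparent-hugepage mode; since the bracketed value on this host is [madvise], THP is not disabled, so AnonHugePages gets read (and came back 0). A sketch of that check, assuming the standard sysfs path:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)  # THP could inflate counts; 0 kB in this run
    else
        anon=0                             # THP fully off, nothing to account for
    fi
    echo "anon=$anon"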
00:03:10.152 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:10.152 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:10.152 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:10.152 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:10.152 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.152 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.152 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.152 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.152 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.152 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.152 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.152 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.152 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45816344 kB' 'MemAvailable: 49320120 kB' 'Buffers: 2704 kB' 'Cached: 10275344 kB' 'SwapCached: 0 kB' 'Active: 7286968 kB' 'Inactive: 3506596 kB' 'Active(anon): 6892376 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518816 kB' 'Mapped: 173060 kB' 'Shmem: 6376860 kB' 'KReclaimable: 192856 kB' 'Slab: 560236 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367380 kB' 'KernelStack: 12816 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8010696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195840 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: setup/common.sh@31/@32 repeat for every /proc/meminfo field from MemTotal through HugePages_Rsvd, each compared against HugePages_Surp and skipped via continue]
00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
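The HugePages_Surp read just returned 0 (stored as surp below); once HugePages_Rsvd is read the same way, verify_nr_hugepages has every input for its pool check. A hedged sketch of the kind of bookkeeping this enables, using values from this run's dumps (the exact expression in setup/hugepages.sh may differ):

    total=1024 free=1024 surp=0 resv=0   # HugePages_* values from the dumps above
    (( avail = total - surp - resv ))    # pages the pool can actually hand out
    (( avail == 1024 )) && echo "hugepage pool intact: $avail pages"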
00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45816344 kB' 'MemAvailable: 49320120 kB' 'Buffers: 2704 kB' 'Cached: 10275360 kB' 'SwapCached: 0 kB' 'Active: 7287156 kB' 'Inactive: 3506596 kB' 'Active(anon): 6892564 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519008 kB' 'Mapped: 173060 kB' 'Shmem: 6376876 kB' 'KReclaimable: 192856 kB' 'Slab: 560236 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367380 kB' 'KernelStack: 12816 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8010716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195840 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
00:03:10.154 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- [trace condensed: the setup/common.sh@31/@32 read/continue pair repeats for each non-matching /proc/meminfo key, MemTotal through HugePages_Free]
00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
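Putting the traced steps together, setup/hugepages.sh@99-@110 performs the following accounting. A sketch built on the get_meminfo() sketch earlier; the @NN comments map to this trace, while the variable plumbing in between is assumed:

    # Sketch of the accounting traced at setup/hugepages.sh@99-@110.
    surp=$(get_meminfo HugePages_Surp)   # @99  -> 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # @100 -> 0 in this run
    nr_hugepages=1024                    # requested earlier by the test

    echo "nr_hugepages=$nr_hugepages"    # @102
    echo "resv_hugepages=$resv"          # @103
    echo "surplus_hugepages=$surp"       # @104
    echo "anon_hugepages=0"              # @105; AnonHugePages is 0 kB above

    # @107/@110: the pool is consistent when the kernel-reported total
    # equals the requested pages plus surplus and reserved pages.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))

With surp=0 and resv=0 here, both checks reduce to HugePages_Total == 1024, which the get_meminfo call below confirms.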
00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:10.155 nr_hugepages=1024 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:10.155 resv_hugepages=0 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:10.155 surplus_hugepages=0 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:10.155 anon_hugepages=0 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.155 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45816344 kB' 'MemAvailable: 49320120 kB' 'Buffers: 2704 kB' 'Cached: 10275384 kB' 'SwapCached: 0 kB' 'Active: 7287192 kB' 'Inactive: 3506596 kB' 'Active(anon): 6892600 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519008 kB' 'Mapped: 173060 kB' 'Shmem: 6376900 kB' 'KReclaimable: 192856 kB' 'Slab: 560236 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367380 kB' 'KernelStack: 12816 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8010740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195840 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
00:03:10.156 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- [trace condensed: the setup/common.sh@31/@32 read/continue pair repeats for each non-matching /proc/meminfo key, MemTotal through Unaccepted]
00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
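The get_nodes trace that follows splits the pool across NUMA nodes. Roughly, under the same reconstruction caveat; note that xtrace prints already-expanded values, so the literal 512 below is what this run assigned per node, not necessarily a constant in the source:

    # Sketch of get_nodes (setup/hugepages.sh@27-@33 in the trace below).
    shopt -s extglob
    nodes_sys=()

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # ${node##*node} keeps only the numeric id:
            # /sys/devices/system/node/node1 -> 1
            nodes_sys[${node##*node}]=512
        done
        no_nodes=${#nodes_sys[@]}   # 2 on this machine
        (( no_nodes > 0 ))          # fail if no NUMA topology is visible
    }

On this 2-node box the 1024-page pool comes out as 512 pages per node, which is what the two nodes_sys assignments below record.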
00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22190316 kB' 'MemUsed: 10686624 kB' 'SwapCached: 0 kB' 'Active: 5422504 kB' 'Inactive: 3264144 kB' 'Active(anon): 5233932 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8394740 kB' 'Mapped: 58464 kB' 'AnonPages: 295104 kB' 'Shmem: 4942024 kB' 'KernelStack: 7400 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121484 kB' 'Slab: 312720 kB' 'SReclaimable: 121484 kB' 'SUnreclaim: 191236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:10.157 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- [trace condensed: the setup/common.sh@31/@32 read/continue pair repeats over the node0 meminfo keys, MemTotal through Unaccepted; the captured log breaks off mid-scan at the HugePages_Total comparison]
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23626156 kB' 'MemUsed: 4038596 kB' 'SwapCached: 0 kB' 'Active: 1864704 kB' 'Inactive: 242452 kB' 'Active(anon): 1658684 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1883372 kB' 'Mapped: 114596 kB' 'AnonPages: 223904 kB' 'Shmem: 1434900 kB' 'KernelStack: 5432 kB' 'PageTables: 3480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71372 kB' 'Slab: 247516 kB' 'SReclaimable: 71372 kB' 'SUnreclaim: 176144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
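The loop traced above is get_meminfo from setup/common.sh: given a field name and an optional NUMA node, it reads /sys/devices/system/node/node<N>/meminfo (falling back to /proc/meminfo when no node is given), strips the "Node <N> " prefix, then scans line by line with IFS=': ' until the requested field matches and echoes its value. A minimal standalone sketch reconstructed from the trace — the loop body is paraphrased, not copied from the SPDK source:

    #!/usr/bin/env bash
    # Sketch of setup/common.sh:get_meminfo as reconstructed from the xtrace above.
    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node counters live in sysfs; prefer them when a node is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <N> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # Split "Field: value kB" on colon/space; skip until the field matches.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }
    get_meminfo HugePages_Surp 1   # prints 0 for node1 in the run above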
00:03:10.158 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [ node1 meminfo fields MemTotal through FilePmdMapped each read with IFS=': ' and skipped with "continue" -- none match HugePages_Surp ]
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:10.159 node0=512 expecting 512
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:10.159 node1=512 expecting 512
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:10.159 
00:03:10.159 real	0m1.326s
00:03:10.159 user	0m0.556s
00:03:10.159 sys	0m0.728s
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:10.159 10:16:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:10.159 ************************************
00:03:10.159 END TEST per_node_1G_alloc
00:03:10.159 ************************************
00:03:10.159 10:16:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:10.159 10:16:04 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
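For reference, the verification pattern traced at setup/hugepages.sh@115-130 reduces to: fold each node's surplus and reserved pages into the expected counts, then print and compare node by node. A condensed sketch, under the assumption that nodes_sys holds the per-node counts actually read back (its population happens outside this excerpt):

    #!/usr/bin/env bash
    # Condensed sketch of the per-node hugepage verification seen in the trace.
    declare -a nodes_test=(512 512)   # expected pages per node (from the trace)
    declare -a nodes_sys=(512 512)    # assumption: values read back via get_meminfo
    ok=1
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || ok=0
    done
    (( ok ))   # the test step passes only when every node matches, as above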
00:03:10.159 10:16:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:10.159 10:16:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:10.159 10:16:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:10.159 ************************************
00:03:10.159 START TEST even_2G_alloc
00:03:10.159 ************************************
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:10.159 10:16:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
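The sizing steps just traced (setup/hugepages.sh@49-84) reduce to simple arithmetic: the requested size — apparently in kB, given that 2097152 yields 1024 pages at the 2048 kB default hugepage size — is divided by the hugepage size to get nr_hugepages, which is then split evenly across the NUMA nodes. A sketch of that computation with the values from this run (paraphrased; the real script counts _no_nodes down and assigns per node, as in the trace):

    #!/usr/bin/env bash
    # Sketch of the get_test_nr_hugepages / get_test_nr_hugepages_per_node arithmetic.
    default_hugepages=2048            # kB, Hugepagesize from /proc/meminfo
    size=2097152                      # kB requested (2 GiB), as traced
    _no_nodes=2
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
    per_node=$(( nr_hugepages / _no_nodes ))       # 1024 / 2 = 512
    declare -a nodes_test
    for (( node = 0; node < _no_nodes; node++ )); do
        nodes_test[node]=$per_node
    done
    echo "nr_hugepages=$nr_hugepages nodes=${nodes_test[*]}"   # -> 1024, 512 512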
00:03:11.541 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:11.541 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:11.541 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:11.541 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:11.541 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:11.541 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:11.541 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:11.541 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:11.541 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:11.541 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:11.541 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:11.541 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:11.541 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:11.541 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:11.541 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:11.541 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:11.541 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:11.541 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45812308 kB' 'MemAvailable: 49316084 kB' 'Buffers: 2704 kB' 'Cached: 10275476 kB' 'SwapCached: 0 kB' 'Active: 7287676 kB' 'Inactive: 3506596 kB' 'Active(anon): 6893084 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519312 kB' 'Mapped: 173104 kB' 'Shmem: 6376992 kB' 'KReclaimable: 192856 kB' 'Slab: 560384 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367528 kB' 'KernelStack: 12800 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8010940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
00:03:11.542 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [ meminfo fields MemTotal through HardwareCorrupted each read with IFS=': ' and skipped with "continue" -- none match AnonHugePages ]
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
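The check that just completed (setup/hugepages.sh@96-97) is the anonymous-hugepage accounting: AnonHugePages is only charged against the test's budget when transparent hugepages are not disabled. The string being matched, "always [madvise] never", is the usual content of the kernel's THP mode file, assumed here to be the standard sysfs path. A sketch:

    #!/usr/bin/env bash
    # Sketch of the THP check traced at setup/hugepages.sh@96-97.
    # Assumes the standard sysfs location for the THP mode string.
    anon=0
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP is not disabled, so anonymous hugepages may exist; measure them.
        anon=$(get_meminfo AnonHugePages)   # get_meminfo as sketched earlier; 0 in this run
    fi
    echo "anon=$anon"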
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45812208 kB' 'MemAvailable: 49315984 kB' 'Buffers: 2704 kB' 'Cached: 10275480 kB' 'SwapCached: 0 kB' 'Active: 7287488 kB' 'Inactive: 3506596 kB' 'Active(anon): 6892896 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519060 kB' 'Mapped: 173048 kB' 'Shmem: 6376996 kB' 'KReclaimable: 192856 kB' 'Slab: 560384 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367528 kB' 'KernelStack: 12816 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8010960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [ meminfo fields MemTotal through Writeback each read with IFS=': ' and skipped with "continue" -- none match HugePages_Surp ]
-- setup/common.sh@32 -- # continue 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
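The trace above is the xtrace of SPDK's get_meminfo helper in setup/common.sh: it snapshots /proc/meminfo (or a per-node meminfo file when a node argument is given), strips any "Node N " prefix, then scans key/value pairs until the requested key matches. The backslash-escaped pattern (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) is just how xtrace prints a quoted, literal right-hand side of [[ ... == "$get" ]]. A minimal sketch reconstructed from the traced commands (the real script may differ in its exact control flow):

    #!/usr/bin/env bash
    # Reconstruction of get_meminfo as it appears in this trace; only the
    # traced commands are known, the loop structure here is an assumption.
    shopt -s extglob                      # for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        # With an empty $node this probes ".../node/node/meminfo" and fails,
        # which is exactly what the [[ -e ... ]] line in the trace shows.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # drop "Node N " on per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # xtrace shows this RHS escaped
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on the node traced here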
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:11.543 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:11.544 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:11.545 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45812632 kB' 'MemAvailable: 49316408 kB' 'Buffers: 2704 kB' 'Cached: 10275496 kB' 'SwapCached: 0 kB' 'Active: 7287444 kB' 'Inactive: 3506596 kB' 'Active(anon): 6892852 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519072 kB' 'Mapped: 173048 kB' 'Shmem: 6377012 kB' 'KReclaimable: 192856 kB' 'Slab: 560488 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367632 kB' 'KernelStack: 12832 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8010980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
[xtrace of the per-key scan elided: every key from 'MemTotal' through 'HugePages_Free' fails [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and is skipped via continue]
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:11.546 nr_hugepages=1024
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:11.546 resv_hugepages=0
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:11.546 surplus_hugepages=0
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:11.546 anon_hugepages=0
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
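With surp and resv in hand, hugepages.sh (lines @102 through @109 above) reports the pool and asserts that the expected 1024 pages account for surplus and reserved pages before re-reading HugePages_Total. A sketch of that accounting step under the values observed in this run (get_meminfo is the helper sketched earlier; the scripts run with set -e, so a failing (( ... )) check aborts the test):

    # Values as traced in this run; nr_hugepages is the even-2G target.
    nr_hugepages=1024
    anon=0                               # hugepages.sh@97
    surp=$(get_meminfo HugePages_Surp)   # 0 here
    resv=$(get_meminfo HugePages_Rsvd)   # 0 here

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # Consistency: the configured count must cover surplus + reserved pages.
    (( 1024 == nr_hugepages + surp + resv ))
    (( 1024 == nr_hugepages ))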
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:11.546 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45815336 kB' 'MemAvailable: 49319112 kB' 'Buffers: 2704 kB' 'Cached: 10275520 kB' 'SwapCached: 0 kB' 'Active: 7284320 kB' 'Inactive: 3506596 kB' 'Active(anon): 6889728 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515876 kB' 'Mapped: 172200 kB' 'Shmem: 6377036 kB' 'KReclaimable: 192856 kB' 'Slab: 560480 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367624 kB' 'KernelStack: 12768 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7995532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
[xtrace of the per-key scan elided: keys from 'MemTotal' through 'Unaccepted' fail [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and are skipped via continue]
10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22193740 kB' 'MemUsed: 10683200 kB' 'SwapCached: 0 kB' 'Active: 5420744 kB' 'Inactive: 3264144 kB' 'Active(anon): 5232172 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8394864 kB' 'Mapped: 57716 kB' 'AnonPages: 293192 kB' 'Shmem: 4942148 kB' 'KernelStack: 7384 kB' 'PageTables: 
4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121484 kB' 'Slab: 312932 kB' 'SReclaimable: 121484 kB' 'SUnreclaim: 191448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.548 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.549 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23621812 kB' 'MemUsed: 4042940 kB' 'SwapCached: 0 kB' 'Active: 1862996 kB' 'Inactive: 242452 kB' 'Active(anon): 1656976 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1883380 kB' 'Mapped: 114384 kB' 'AnonPages: 222140 kB' 'Shmem: 1434908 kB' 'KernelStack: 5352 kB' 'PageTables: 
3244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71372 kB' 'Slab: 247544 kB' 'SReclaimable: 71372 kB' 'SUnreclaim: 176172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.550 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.551 10:16:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.551 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:11.809 node0=512 expecting 512 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:11.809 node1=512 expecting 512 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:11.809 00:03:11.809 real 0m1.456s 00:03:11.809 user 0m0.643s 00:03:11.809 sys 0m0.777s 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:11.809 10:16:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:11.809 ************************************ 00:03:11.809 END TEST even_2G_alloc 00:03:11.809 ************************************ 00:03:11.809 10:16:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:11.809 10:16:06 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:11.809 10:16:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:11.809 10:16:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.809 10:16:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:11.809 
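The even_2G_alloc trace above is dominated by a single helper: setup/common.sh's get_meminfo reads /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node id is given), strips the per-node "Node N " prefix, then scans key by key with IFS=': ' until it can echo the requested value; every skipped key shows up in the xtrace as one continue. A minimal self-contained sketch of that lookup, assuming bash on Linux; get_meminfo_value is a stand-in name and a simplified rewrite, not SPDK's verbatim function:

#!/usr/bin/env bash
shopt -s extglob    # the +([0-9]) pattern below needs extended globbing

# Sketch: echo the value of key $1 from /proc/meminfo, or from NUMA node $2's
# meminfo when a node id is supplied.
get_meminfo_value() {
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo mem
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix lines with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1                            # key not found
}

get_meminfo_value HugePages_Total       # system-wide count (1024 in the run above)
get_meminfo_value HugePages_Surp 0      # node0 surplus pages (0 in the run above)

Keeping the scan in pure bash, as the suite does, avoids forking grep or awk for every probe, which matters when the helper is called dozens of times per test.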
00:03:11.809 ************************************
00:03:11.809 START TEST odd_alloc
00:03:11.809 ************************************
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:11.809 10:16:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:12.744 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:12.744 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:12.744 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:12.744 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:12.744 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:12.744 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:12.744 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:12.744 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:12.744 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:12.744 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:12.744 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:12.744 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:12.744 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:12.744 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:12.744 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:12.744 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:12.744 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:12.744 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:12.744 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:12.744 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:12.744 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:12.744 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:12.744 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:12.744 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:12.744 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:12.744 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:12.744 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45804992 kB' 'MemAvailable: 49308768 kB' 'Buffers: 2704 kB' 'Cached: 10275608 kB' 'SwapCached: 0 kB' 'Active: 7283760 kB' 'Inactive: 3506596 kB' 'Active(anon): 6889168 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515272 kB' 'Mapped: 172160 kB' 'Shmem: 6377124 kB' 'KReclaimable: 192856 kB' 'Slab: 560172 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367316 kB' 'KernelStack: 12752 kB' 'PageTables: 7628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 
'Committed_AS: 7995600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB' 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- 
00:03:13.008 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted each fail [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and hit continue]
00:03:13.009 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:13.009 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.009 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:13.009 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
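[editor's note] The loop being stepped through above is setup/common.sh's get_meminfo helper: it reads /proc/meminfo (or a per-node meminfo file) into an array, strips any "Node N " prefix, then walks the entries with IFS=': ' read until the requested key matches and echoes its value. Below is a minimal sketch of that pattern, assuming plain bash with extglob; the function name and structure are illustrative, not the verbatim SPDK source.

  #!/usr/bin/env bash
  # Minimal sketch of a get_meminfo-style lookup (illustrative only).
  shopt -s extglob
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      local var val _ line mem
      # Per-node meminfo files prefix every line with "Node N ".
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }") # strip any "Node N " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val" # e.g. 0 for AnonHugePages on this runner
              return 0
          fi
      done
      return 1
  }
  get_meminfo_sketch HugePages_Total # would print 1025 on this runner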
00:03:13.009 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:13.009 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:13.009 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:13.009 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:13.009 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.009 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.009 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.009 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.009 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.009 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.009 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.009 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.010 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45805652 kB' 'MemAvailable: 49309428 kB' 'Buffers: 2704 kB' 'Cached: 10275612 kB' 'SwapCached: 0 kB' 'Active: 7284124 kB' 'Inactive: 3506596 kB' 'Active(anon): 6889532 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515620 kB' 'Mapped: 172192 kB' 'Shmem: 6377128 kB' 'KReclaimable: 192856 kB' 'Slab: 560184 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367328 kB' 'KernelStack: 12752 kB' 'PageTables: 7628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7995620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
00:03:13.010 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: every key from MemTotal through HugePages_Rsvd fails [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hits continue]
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45806092 kB' 'MemAvailable: 49309868 kB' 'Buffers: 2704 kB' 'Cached: 10275628 kB' 'SwapCached: 0 kB' 'Active: 7283968 kB' 'Inactive: 3506596 kB' 'Active(anon): 6889376 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515428 kB' 'Mapped: 172112 kB' 'Shmem: 6377144 kB' 'KReclaimable: 192856 kB' 'Slab: 560176 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367320 kB' 'KernelStack: 12736 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7995640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
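[editor's note] The printf blocks above are full /proc/meminfo snapshots captured by the helper before each scan. If you only care about the hugepage counters this test is checking, something like the following pulls them out directly; these are hypothetical convenience one-liners, not part of setup/common.sh:

  # Hypothetical one-liners, not from the test suite.
  grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
  awk '/^Hugepagesize:/ { print $2, $3 }' /proc/meminfo # e.g. "2048 kB"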
00:03:13.011 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: every key from MemTotal through HugePages_Free fails [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and hits continue]
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:13.013 nr_hugepages=1025
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:13.013 resv_hugepages=0
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:13.013 surplus_hugepages=0
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:13.013 anon_hugepages=0
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
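[editor's note] The two arithmetic checks above are the heart of the odd_alloc case: after requesting an odd hugepage count (1025), the test asserts that the kernel reports exactly that many total pages, with no surplus or reserved pages throwing off the accounting. A sketch of the same bookkeeping, assuming the illustrative get_meminfo_sketch helper from earlier in this log (the variable requested is also illustrative; values in comments are from this run):

  # Sketch of the odd_alloc bookkeeping, not the verbatim hugepages.sh source.
  requested=1025                                      # odd count the test asked for
  nr_hugepages=$(get_meminfo_sketch HugePages_Total)  # 1025
  surp=$(get_meminfo_sketch HugePages_Surp)           # 0
  resv=$(get_meminfo_sketch HugePages_Rsvd)           # 0
  (( requested == nr_hugepages + surp + resv )) || echo "allocation mismatch"
  (( requested == nr_hugepages )) || echo "unexpected surplus/reserved pages"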
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.013 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45806092 kB' 'MemAvailable: 49309868 kB' 'Buffers: 2704 kB' 'Cached: 10275648 kB' 'SwapCached: 0 kB' 'Active: 7284020 kB' 'Inactive: 3506596 kB' 'Active(anon): 6889428 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515504 kB' 'Mapped: 172112 kB' 'Shmem: 6377164 kB' 'KReclaimable: 192856 kB' 'Slab: 560176 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367320 kB' 'KernelStack: 12768 kB' 'PageTables: 7672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7995660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: keys MemTotal through AnonPages fail [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and hit continue]
00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.014 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22204828 kB' 'MemUsed: 10672112 kB' 'SwapCached: 0 kB' 'Active: 5421296 kB' 'Inactive: 3264144 kB' 'Active(anon): 5232724 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8394992 kB' 'Mapped: 57716 kB' 'AnonPages: 293612 kB' 'Shmem: 4942276 kB' 'KernelStack: 7416 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121484 kB' 'Slab: 312860 kB' 'SReclaimable: 121484 kB' 'SUnreclaim: 191376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
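The xtrace above is common.sh's get_meminfo at work: it picks /proc/meminfo (or a per-node meminfo file), strips the "Node <N> " prefix that per-node files carry, and reads key/value pairs until the requested field matches. A minimal sketch of that lookup pattern, reconstructed only from the commands visible in the trace; get_meminfo_sketch is an illustrative name, not the script's API:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Pick the global or per-node meminfo file, strip any "Node <N> "
    # prefix, then scan key/value pairs until the requested field is
    # found and print its value.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val line
        local mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    # Matches the values echoed in the trace on this machine:
    get_meminfo_sketch HugePages_Total      # -> 1025 (pool-wide)
    get_meminfo_sketch HugePages_Surp 0     # -> 0 (node0 surplus)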
00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.015 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22204828 kB' 'MemUsed: 10672112 kB' 'SwapCached: 0 kB' 'Active: 5421296 kB' 'Inactive: 3264144 kB' 'Active(anon): 5232724 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8394992 kB' 'Mapped: 57716 kB' 'AnonPages: 293612 kB' 'Shmem: 4942276 kB' 'KernelStack: 7416 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121484 kB' 'Slab: 312860 kB' 'SReclaimable: 121484 kB' 'SUnreclaim: 191376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace elided: the same read/continue scan over the node0 meminfo fields until HugePages_Surp matches ...]
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.016 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.017 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23601624 kB' 'MemUsed: 4063128 kB' 'SwapCached: 0 kB' 'Active: 1862748 kB' 'Inactive: 242452 kB' 'Active(anon): 1656728 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1883384 kB' 'Mapped: 114396 kB' 'AnonPages: 221900 kB' 'Shmem: 1434912 kB' 'KernelStack: 5352 kB' 'PageTables: 3196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71372 kB' 'Slab: 247316 kB' 'SReclaimable: 71372 kB' 'SUnreclaim: 175944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[... xtrace elided: the same read/continue scan over the node1 meminfo fields until HugePages_Surp matches ...]
00:03:13.018 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.018 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.018 10:16:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:13.018 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
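At this point the test has the surplus for both nodes; the trace below folds reserve and surplus into the expected per-node counts, then compares expected and observed counts as sorted sets, since the kernel may park the odd extra page on either node. A sketch of that comparison using this run's numbers; the variable names mirror the trace, but the code is an illustrative reconstruction, not the script verbatim:

    #!/usr/bin/env bash
    # Expected split of the 1025-page odd allocation vs. what each NUMA
    # node reports (values taken from the run above).
    nodes_test=([0]=513 [1]=512)   # what the test asked for per node
    nodes_sys=([0]=512 [1]=513)    # HugePages_Total read back per node

    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        # Using the count as an array index is a cheap sort: indexed
        # array keys always expand in ascending order.
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done

    # "512 513" == "512 513": the layout is accepted even though the
    # odd page landed on the other node than first expected.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "odd_alloc layout OK"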
00:03:13.018 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:13.018 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:13.018 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:13.018 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:13.018 node0=512 expecting 513
00:03:13.018 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:13.018 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:13.018 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:13.018 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:13.018 node1=513 expecting 512
00:03:13.018 10:16:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:13.018
00:03:13.018 real 0m1.396s
00:03:13.018 user 0m0.588s
00:03:13.018 sys 0m0.766s
00:03:13.018 10:16:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:13.018 10:16:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:13.018 ************************************
00:03:13.018 END TEST odd_alloc
00:03:13.018 ************************************
00:03:13.018 10:16:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:13.018 10:16:07 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:13.018 10:16:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:13.018 10:16:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:13.018 10:16:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:13.276 ************************************
00:03:13.276 START TEST custom_alloc
00:03:13.276 ************************************
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:13.276 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
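The two get_test_nr_hugepages calls above turn a requested pool size into a page count, and the loop traced just below joins the per-node targets into the HUGENODE string handed to scripts/setup.sh. A compact sketch of both steps; the kB units and the helper names are inferred from the numbers in the trace (1048576 / 2048 = 512, 2097152 / 2048 = 1024), not taken from the script itself:

    #!/usr/bin/env bash
    default_hugepages=2048   # kB, matching 'Hugepagesize: 2048 kB' above

    # Inferred arithmetic: requested size (kB) -> number of default pages.
    size_to_pages() {
        local size=$1
        ((size >= default_hugepages)) || return 1
        echo $((size / default_hugepages))
    }

    nodes_hp[0]=$(size_to_pages 1048576)   # 1 GiB -> 512 pages on node0
    nodes_hp[1]=$(size_to_pages 2097152)   # 2 GiB -> 1024 pages on node1

    # Join the targets with commas, as custom_alloc's IFS=, does, and
    # total them up for the global nr_hugepages.
    build_hugenode() {
        local IFS=, node _nr_hugepages=0
        local -a HUGENODE=()
        for node in "${!nodes_hp[@]}"; do
            HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
            ((_nr_hugepages += nodes_hp[node]))
        done
        echo "HUGENODE=${HUGENODE[*]}"     # nodes_hp[0]=512,nodes_hp[1]=1024
        echo "nr_hugepages=$_nr_hugepages" # 1536
    }
    build_hugenode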
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:13.277 10:16:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:14.209 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:14.209 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:14.209 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:14.209 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:14.209 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:14.209 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:14.209 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:14.209 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:14.209 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:14.209 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:14.209 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:14.209 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:14.209 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:14.209 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:14.209 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:14.209 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:14.209 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
vfio-pci driver 00:03:14.209 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:14.209 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:14.209 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:14.209 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:14.209 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:14.475 10:16:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:14.475 10:16:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:14.475 10:16:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:14.475 10:16:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:14.475 10:16:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:14.475 10:16:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:14.475 10:16:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:14.475 10:16:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:14.475 10:16:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:14.475 10:16:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:14.475 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:14.475 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:14.475 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:14.475 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.475 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.475 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.475 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.476 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.476 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.476 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.476 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.476 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44761152 kB' 'MemAvailable: 48264928 kB' 'Buffers: 2704 kB' 'Cached: 10275744 kB' 'SwapCached: 0 kB' 'Active: 7285660 kB' 'Inactive: 3506596 kB' 'Active(anon): 6891068 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516984 kB' 'Mapped: 172608 kB' 'Shmem: 6377260 kB' 'KReclaimable: 192856 kB' 'Slab: 560276 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367420 kB' 'KernelStack: 12784 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7997748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
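The printf above is the /proc/meminfo snapshot that get_meminfo (setup/common.sh@16-33) just slurped with mapfile; what follows is its field-by-field scan for the requested key. A sketch of that pattern, reconstructed from the trace and hedged accordingly (SPDK's actual helper may structure the loop differently):

#!/usr/bin/env bash
shopt -s extglob

# get_meminfo KEY [NODE]: read /proc/meminfo (or a per-node meminfo file),
# strip the "Node N " prefix that per-node files carry, then scan each
# "Key: value" line until KEY matches and echo its value.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total   # prints 1536 against the snapshot above

With node unset, the @23 test for /sys/devices/system/node/node/meminfo fails and the scan stays on /proc/meminfo, which is exactly what the trace shows.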
[log condensed: setup/common.sh@31-32 then walks the snapshot field by field -- read -r var val _, [[ $var == AnonHugePages ]], continue -- emitting one identical compare-and-continue trace pair per meminfo key above; those entries are omitted here]
00:03:14.477 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:14.477 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.477 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:14.477 10:16:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
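verify_nr_hugepages has now captured anon=0 (no transparent hugepages inflating the count) and next samples HugePages_Surp and HugePages_Rsvd. Surplus pages are allocations granted beyond nr_hugepages via overcommit, and reserved pages are committed-but-unfaulted mappings, so a plausible reading of the @92-@94 locals -- an assumption about the check being built, not SPDK's verbatim condition -- is:

# Assumed shape of the verification (names from the trace, logic hypothesized):
# the configured pool, minus any surplus, must equal what the test requested.
nr_hugepages=1536
total=$(get_meminfo HugePages_Total)   # 1536 in every snapshot here
surp=$(get_meminfo HugePages_Surp)     # 0
resv=$(get_meminfo HugePages_Rsvd)     # 0
((total - surp == nr_hugepages)) || echo "unexpected hugepage count" >&2

(get_meminfo as sketched earlier.)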
00:03:14.477 10:16:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:14.477 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:14.477 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:14.477 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:14.477 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.477 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.477 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.477 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.477 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.477 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.477 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.477 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.477 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44759072 kB' 'MemAvailable: 48262848 kB' 'Buffers: 2704 kB' 'Cached: 10275748 kB' 'SwapCached: 0 kB' 'Active: 7287908 kB' 'Inactive: 3506596 kB' 'Active(anon): 6893316 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519264 kB' 'Mapped: 172560 kB' 'Shmem: 6377264 kB' 'KReclaimable: 192856 kB' 'Slab: 560244 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367388 kB' 'KernelStack: 12752 kB' 'PageTables: 7496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8000672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
[log condensed: the same setup/common.sh@31-32 field scan repeats, this time against HugePages_Surp; the identical compare-and-continue entries are omitted]
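Before the scan resumes below, the snapshot values can be cross-checked against the requested layout: 1536 pages at the reported Hugepagesize of 2048 kB is exactly the Hugetlb figure, confirming the two-node request (512 + 1024) landed in full:

# Arithmetic check on the snapshot (not part of the test's own output):
echo $((512 + 1024))      # 1536       -> HugePages_Total / HugePages_Free
echo $((1536 * 2048))     # 3145728 kB -> Hugetlb: 3145728 kB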
00:03:14.479 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.479 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.479 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:14.479 10:16:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:14.479 10:16:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:14.479 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:14.479 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:14.479 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:14.479 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.479 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.479 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.479 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.479 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.479 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.479 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.479 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.479 10:16:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44755236 kB' 'MemAvailable: 48259012 kB' 'Buffers: 2704 kB' 'Cached: 10275760 kB' 'SwapCached: 0 kB' 'Active: 7289684 kB' 'Inactive: 3506596 kB' 'Active(anon): 6895092 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521060 kB' 'Mapped: 172912 kB' 'Shmem: 6377276 kB' 'KReclaimable: 192856 kB' 'Slab: 560288 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367432 kB' 'KernelStack: 12768 kB' 'PageTables: 7588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 8002024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196068 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
[log condensed: the setup/common.sh@31-32 field scan repeats against HugePages_Rsvd; this section of the log breaks off mid-scan at]
00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages ==
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.480 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- 
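The backslash-escaped \H\u\g\e\P\a\g\e\s\_\R\s\v\d in the trace above is bash xtrace quoting at work: the right-hand side of [[ $var == "$get" ]] is quoted, so it is matched literally rather than as a glob, and set -x renders each character escaped to make that explicit. A minimal sketch of the matching loop, assuming it mirrors setup/common.sh's get_meminfo (run it under bash -x to reproduce the escaping):

    #!/usr/bin/env bash
    get=HugePages_Rsvd
    # Scan /proc/meminfo one "Key: value [unit]" record at a time; IFS=': '
    # splits on the colon plus surrounding spaces, and _ swallows the unit.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # traces as [[ Key == \H\u\g\e... ]]
        echo "$val"                        # 0 in the run above
        break
    done < /proc/meminfo

00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- 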
setup/common.sh@31 -- # read -r var val _ 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:14.481 nr_hugepages=1536 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:14.481 resv_hugepages=0 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:14.481 surplus_hugepages=0 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:14.481 anon_hugepages=0 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.481 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44755932 kB' 'MemAvailable: 48259708 kB' 'Buffers: 2704 kB' 'Cached: 10275788 kB' 'SwapCached: 0 kB' 'Active: 7285004 kB' 'Inactive: 3506596 kB' 'Active(anon): 6890412 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516404 kB' 'Mapped: 172476 kB' 'Shmem: 6377304 kB' 'KReclaimable: 192856 kB' 'Slab: 560288 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367432 kB' 'KernelStack: 12752 kB' 'PageTables: 7544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7997676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
[... xtrace condensed (00:03:14.481-00:03:14.482): setup/common.sh@31-32 scanned the keys MemTotal through Unaccepted against HugePages_Total with no match, continuing on each ...]
00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
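get_nodes, traced just above, enumerates the NUMA nodes with the extglob pattern node+([0-9]) and keys an array by node index via ${node##*node}. A rough self-contained equivalent, assuming the per-node counts are read from the 2048 kB hugepage sysfs knobs (in this run they come out as 512 and 1024):

    #!/usr/bin/env bash
    shopt -s extglob          # enables the +([0-9]) pattern seen in the trace
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips the path up to the last "node", keeping the index
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    echo "no_nodes=$no_nodes nodes_sys=(${nodes_sys[*]})"   # no_nodes=2 above

00:03:14.482 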
10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.482 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:14.483 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:14.483 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.483 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.483 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.483 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.483 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22211032 kB' 'MemUsed: 10665908 kB' 'SwapCached: 0 kB' 'Active: 5421560 kB' 'Inactive: 3264144 kB' 'Active(anon): 5232988 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8395096 kB' 'Mapped: 58068 kB' 'AnonPages: 293856 kB' 'Shmem: 4942380 kB' 'KernelStack: 7400 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121484 kB' 'Slab: 312992 kB' 'SReclaimable: 121484 kB' 'SUnreclaim: 191508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:14.483 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.483 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.483 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.483 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.483 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.483 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.483 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.483 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.483 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.483 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.483 10:16:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
[... xtrace condensed (00:03:14.483-00:03:14.484): setup/common.sh@31-32 scanned the remaining node0 meminfo keys SwapCached through HugePages_Free against HugePages_Surp with no match, continuing on each ...]
00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
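For a per-node query, get_meminfo swaps mem_f from /proc/meminfo to the node's sysfs meminfo and strips the "Node N " prefix every line carries there, which is what the mem=("${mem[@]#Node +([0-9]) }") step in the trace does. A condensed sketch, assuming it follows setup/common.sh, with get and node set to the values just traced:

    #!/usr/bin/env bash
    shopt -s extglob                   # +([0-9]) is also valid inside ${var#pattern}
    get=HugePages_Surp node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; break; }   # echoes 0 above
    done

00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- 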
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22544900 kB' 'MemUsed: 5119852 kB' 'SwapCached: 0 kB' 'Active: 1868276 kB' 'Inactive: 242452 kB' 'Active(anon): 1662256 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 242452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1883396 kB' 'Mapped: 114408 kB' 'AnonPages: 227388 kB' 'Shmem: 1434924 kB' 'KernelStack: 5368 kB' 'PageTables: 3168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71372 kB' 'Slab: 247296 kB' 'SReclaimable: 71372 kB' 'SUnreclaim: 175924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.484 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[32 near-identical scan iterations elided: every remaining node-meminfo field, Active through HugePages_Free, is tested against HugePages_Surp and skipped with continue]
00:03:14.485 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.485 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.485 10:16:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
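The loop traced above is setup/common.sh's get_meminfo() walking a meminfo listing field by field until the requested key matches, then echoing its value (0 surplus pages here). A minimal standalone sketch of the same pattern, assuming bash and a Linux /proc; the helper name get_meminfo_field is illustrative, not SPDK's:

    # Split each "Key: value [kB]" line on ': ' and print the value of the
    # one requested key; every other field falls through to continue.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done </proc/meminfo
        return 1   # requested key not present
    }
    get_meminfo_field HugePages_Surp   # prints 0 here, matching the trace

The SPDK helper additionally strips a leading "Node <n> " prefix (the mem=("${mem[@]#Node +([0-9]) }") step in the trace), so the same loop also works on per-node meminfo files.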
00:03:14.485 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:14.485 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:14.485 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:14.485 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:14.485 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:14.485 node0=512 expecting 512
00:03:14.485 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:14.485 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:14.485 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:14.485 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:14.485 node1=1024 expecting 1024
00:03:14.485 10:16:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:14.485
00:03:14.485 real 0m1.415s
00:03:14.485 user 0m0.601s
00:03:14.485 sys 0m0.776s
00:03:14.485 10:16:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:14.485 10:16:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:14.485 ************************************
00:03:14.485 END TEST custom_alloc
00:03:14.485 ************************************
00:03:14.485 10:16:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
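custom_alloc passes its final check by joining the observed per-node page counts and comparing them to the expected string, as the hugepages.sh@126-@130 trace just showed. A simplified sketch of that bookkeeping with the values from this run (array names mirror the trace; the join is a simplification of the sorted_t/sorted_s accounting):

    nodes_sys=(512 1024)    # observed 2048 kB pages on node0, node1 (from the log)
    nodes_test=(512 1024)   # what the test pre-computed per node
    got=$(IFS=,; echo "${nodes_sys[*]}")     # -> 512,1024
    want=$(IFS=,; echo "${nodes_test[*]}")   # -> 512,1024
    [[ $got == "$want" ]] && echo "per-node allocation matches: $got"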
00:03:14.485 10:16:09 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:14.485 10:16:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:14.485 10:16:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:14.485 10:16:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:14.743 ************************************
00:03:14.743 START TEST no_shrink_alloc
00:03:14.743 ************************************
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
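get_test_nr_hugepages just sized the run: a 2097152 kB request divided by the default 2048 kB hugepage (the Hugepagesize reported in the meminfo snapshots below) gives nr_hugepages=1024, and with a single user-requested node the whole allocation lands on node 0. A sketch of that arithmetic, with variable names mirroring hugepages.sh and the kB unit for size assumed from the 1024-page result:

    size=2097152                # requested size in kB (assumed unit)
    default_hugepages=2048      # kB per 2 MB hugepage
    nr_hugepages=$(( size / default_hugepages ))   # -> 1024
    user_nodes=(0)              # only node 0 was passed in
    nodes_test=()
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages             # node 0 gets all 1024 pages
    done
    echo "nodes_test[0]=${nodes_test[0]}"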
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:14.743 10:16:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:15.678 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:15.678 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:15.678 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:15.678 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:15.678 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:15.678 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:15.678 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:15.678 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:15.678 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:15.678 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:15.678 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:15.678 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:15.678 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:15.678 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:15.678 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:15.679 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:15.679 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45796900 kB' 'MemAvailable: 49300676 kB' 'Buffers: 2704 kB' 'Cached: 10275868 kB' 'SwapCached: 0 kB' 'Active: 7285168 kB' 'Inactive: 3506596 kB' 'Active(anon): 6890576 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516592 kB' 'Mapped: 172080 kB' 'Shmem: 6377384 kB' 'KReclaimable: 192856 kB' 'Slab: 560184 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367328 kB' 'KernelStack: 12800 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7996316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.943 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[scan iterations elided: MemFree through HardwareCorrupted are each tested against AnonHugePages and skipped with continue]
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
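Before the AnonHugePages read, hugepages.sh@96 gated on transparent hugepages not being disabled: the kernel brackets the active THP mode, and the test only records an anonymous-hugepage baseline when that mode string is not [never]. A sketch of the gate, assuming the standard sysfs path (the awk line is shorthand for the get_meminfo scan, not SPDK's code):

    # /sys/kernel/mm/transparent_hugepage/enabled reads e.g. "always [madvise] never";
    # the bracketed entry is the active mode.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
        echo "THP enabled ($thp); AnonHugePages baseline: ${anon} kB"
    fi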
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
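The common.sh@22-@25 lines just traced show how get_meminfo picks its source: it defaults to /proc/meminfo, probes /sys/devices/system/node/node$node/meminfo (with node empty here, hence the odd node/node path that fails the -e test), and would switch to the per-node file only when a node id is supplied. A sketch of that selection under the same assumptions:

    node=${1:-}                 # optional NUMA node id; empty in this trace
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node counters
    fi
    echo "reading $mem_f"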
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45802756 kB' 'MemAvailable: 49306532 kB' 'Buffers: 2704 kB' 'Cached: 10275868 kB' 'SwapCached: 0 kB' 'Active: 7284752 kB' 'Inactive: 3506596 kB' 'Active(anon): 6890160 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516152 kB' 'Mapped: 172216 kB' 'Shmem: 6377384 kB' 'KReclaimable: 192856 kB' 'Slab: 560216 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367360 kB' 'KernelStack: 12752 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7996332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.944 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[scan iterations elided: MemFree through HugePages_Rsvd are each tested against HugePages_Surp and skipped with continue]
00:03:15.947 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.947 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:15.947 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:15.947 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
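verify_nr_hugepages issues one get_meminfo call per counter (AnonHugePages above, HugePages_Surp here, HugePages_Rsvd next), and each call rescans the whole file. The same counters can also be collected in a single pass; an illustrative sketch, not how common.sh does it:

    declare -A hp
    while IFS=': ' read -r key val _; do
        [[ $key == HugePages_* ]] && hp[$key]=$val
    done </proc/meminfo
    echo "total=${hp[HugePages_Total]} free=${hp[HugePages_Free]}" \
         "rsvd=${hp[HugePages_Rsvd]} surp=${hp[HugePages_Surp]}"
    # On this host: total=1024 free=1024 rsvd=0 surp=0, matching the snapshots.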
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB' 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.948 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.948 10:16:10 
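The long runs of `[[ key == \H\u\g\e\P\a\g\e\s\_... ]] ... continue` entries are ordinary bash xtrace output: `[[ $var == "$get" ]]` prints the right-hand side with every character backslash-escaped, once per /proc/meminfo key, until the requested field matches. A minimal sketch of the lookup the trace is exercising, under the assumption that it mirrors the visible xtrace steps (the name get_meminfo_sketch is hypothetical, and this is not the verbatim SPDK test/setup/common.sh code):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Look up one field from /proc/meminfo, or from a NUMA node's meminfo
    # file when a node number is given (those lines carry a "Node <N> " prefix).
    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _ mem_f mem
        mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # strip the per-node prefix
        while IFS=': ' read -r var val _; do   # split "key: value kB"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

With an empty node argument the `node$node/meminfo` path does not exist, so the helper falls back to /proc/meminfo, which is exactly the `[[ -e /sys/devices/system/node/node/meminfo ]]` check visible in the trace. Here `get_meminfo HugePages_Surp` has just returned 0, and the HugePages_Rsvd lookup over the snapshot above is about to return 0 as well.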
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.951 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45803552 kB' 'MemAvailable: 49307328 kB' 'Buffers: 2704 kB' 'Cached: 10275908 kB' 'SwapCached: 0 kB' 'Active: 7284724 kB' 'Inactive: 3506596 kB' 'Active(anon): 6890132 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516120 kB' 'Mapped: 172140 kB' 'Shmem: 6377424 kB' 'KReclaimable: 192856 kB' 'Slab: 560200 kB' 'SReclaimable: 192856 kB' 'SUnreclaim: 367344 kB' 'KernelStack: 12800 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7996376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
[xtrace of the per-key scan of the snapshot above against HugePages_Total elided]
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
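What the test has just asserted: the kernel's HugePages_Total must equal the requested page count plus any surplus and reserved pages (1024 == 1024 + 0 + 0 in this run). The same consistency check as a standalone snippet, assuming the hypothetical get_meminfo_sketch helper shown earlier (this is a sketch of the check, not the hugepages.sh code itself):

    # Hugepage accounting sanity check.
    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
    total=$(get_meminfo_sketch HugePages_Total)   # 1024 in this run
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0
    if (( total == nr_hugepages + surp + resv )); then
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    else
        echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
        exit 1
    fi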
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:15.953 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21162288 kB' 'MemUsed: 11714652 kB' 'SwapCached: 0 kB' 'Active: 5421172 kB' 'Inactive: 3264144 kB' 'Active(anon): 5232600 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8395168 kB' 'Mapped: 57716 kB' 'AnonPages: 293324 kB' 'Shmem: 4942452 kB' 'KernelStack: 7416 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121484 kB' 'Slab: 312944 kB' 'SReclaimable: 121484 kB' 'SUnreclaim: 191460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace of the per-key scan of the node0 snapshot above against HugePages_Surp elided]
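get_nodes then repeats the exercise per NUMA node: it globs /sys/devices/system/node/node+([0-9]) (two nodes on this box, with all 1024 pages on node0), and the node-qualified call get_meminfo HugePages_Surp 0 switches mem_f to node0's own meminfo file, whose snapshot appears above. A sketch of that walk, reusing the hypothetical helper from earlier; the real get_nodes may read a different per-node sysfs counter, but this reproduces the values visible in the trace (node0=1024, node1=0):

    # Enumerate NUMA nodes and collect per-node hugepage counts.
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}                     # ".../node0" -> "0"
        nodes_sys[$n]=$(get_meminfo_sketch HugePages_Total "$n")
    done
    for n in "${!nodes_sys[@]}"; do
        echo "node$n: HugePages_Total=${nodes_sys[$n]}"   # node0: 1024, node1: 0
    done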
00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.954 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[trace elided: the loop skips each remaining meminfo field (Unevictable through HugePages_Free) with "continue" until HugePages_Surp matches]
00:03:15.955 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.955 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:15.955 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:15.955 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:15.955 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:15.955 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:15.955 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:15.955 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:15.955 node0=1024 expecting 1024
00:03:15.955 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:15.955 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:15.955 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:15.955 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:15.955 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:15.955 10:16:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:17.335 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:17.335 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:17.335 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:17.335 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:17.335 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:17.335 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:17.335 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:17.335 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:17.335 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:17.335 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:17.335 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:17.335 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:17.335 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:17.335 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:17.335 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:17.335 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:17.335 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
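The scan that just completed above is get_meminfo (setup/common.sh) walking a meminfo file one field at a time: set IFS to ': ', read each line into var/val, "continue" past every key that is not the one requested, then echo the value and return. A minimal reconstruction of that loop from the traced statements follows; the function wrapper and the unit-stripping are assumptions, not the verbatim SPDK source.

    shopt -s extglob                                  # +([0-9]) below is an extglob pattern
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node id, prefer the per-node view (cf. the @23/@25 tests in the trace)
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"                     # @28
        mem=("${mem[@]#Node +([0-9]) }")              # @29: drop the "Node <n> " prefix
        while IFS=': ' read -r var val _; do          # @31
            [[ $var == "$get" ]] || continue          # @32: skip non-matching keys
            echo "${val%% *}"                         # @33: print the value, unit dropped
            return 0
        done < <(printf '%s\n' "${mem[@]}")           # @16: feeds the loop
    }
    get_meminfo HugePages_Surp                        # prints 0 on the box traced above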
00:03:17.335 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45801628 kB' 'MemAvailable: 49305356 kB' 'Buffers: 2704 kB' 'Cached: 10275980 kB' 'SwapCached: 0 kB' 'Active: 7285656 kB' 'Inactive: 3506596 kB' 'Active(anon): 6891064 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516736 kB' 'Mapped: 172292 kB' 'Shmem: 6377496 kB' 'KReclaimable: 192760 kB' 'Slab: 559968 kB' 'SReclaimable: 192760 kB' 'SUnreclaim: 367208 kB' 'KernelStack: 12800 kB' 'PageTables: 7636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7996560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
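The @96 test above matches the transparent-hugepage policy string "always [madvise] never" against the escaped glob *\[\n\e\v\e\r\]* (that is, *[never]*): anonymous huge pages are only worth counting when THP has not been forced off. A hedged sketch of that gate; the sysfs path is the standard one, and the if/else shape is inferred from the trace rather than copied from hugepages.sh.

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)       # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP enabled in some mode, so AnonHugePages can be nonzero
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    else
        anon=0
    fi
    echo "anon=$anon"                                          # 0 kB in the snapshot above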
00:03:17.335 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[trace elided: the loop skips each /proc/meminfo field (MemFree through HardwareCorrupted) with "continue" until AnonHugePages matches]
00:03:17.336 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:17.336 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.336 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:17.336 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:17.336 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:17.336 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.336 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:17.336 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:17.336 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.336 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.336 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.337 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.337 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.337 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.337 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.337 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.337 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45802392 kB' 'MemAvailable: 49306120 kB' 'Buffers: 2704 kB' 'Cached: 10275984 kB' 'SwapCached: 0 kB' 'Active: 7285272 kB' 'Inactive: 3506596 kB' 'Active(anon): 6890680 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516396 kB' 'Mapped: 172228 kB' 'Shmem: 6377500 kB' 'KReclaimable: 192760 kB' 'Slab: 559952 kB' 'SReclaimable: 192760 kB' 'SUnreclaim: 367192 kB' 'KernelStack: 12784 kB' 'PageTables: 7576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7996576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
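verify_nr_hugepages has now collected anon and is fetching surp (and, below, resv) before comparing per-node totals. For orientation, the standard hugetlb arithmetic these counters support (general /proc/meminfo semantics, not a line taken from hugepages.sh):

    free=$(awk '$1 == "HugePages_Free:" {print $2}' /proc/meminfo)   # 1024 in the snapshots
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)   # 0
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)   # 0
    # Free still counts reserved-but-unfaulted pages, so a new mapping can claim:
    echo "available=$((free - resv)) surplus=$surp"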
[trace elided: the loop skips each /proc/meminfo field (MemTotal through HugePages_Rsvd) with "continue" until HugePages_Surp matches]
00:03:17.338 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.338 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.338 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:17.338 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:17.338 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:17.338 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:17.338 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:17.338 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:17.338 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.338 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.338 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.338 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.338 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.338 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.338 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.338 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.338 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45803068 kB' 'MemAvailable: 49306796 kB' 'Buffers: 2704 kB' 'Cached: 10276004 kB' 'SwapCached: 0 kB' 'Active: 7285120 kB' 'Inactive: 3506596 kB' 'Active(anon): 6890528 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516200 kB' 'Mapped: 172152 kB' 'Shmem: 6377520 kB' 'KReclaimable: 192760 kB' 'Slab: 559944 kB' 'SReclaimable: 192760 kB' 'SUnreclaim: 367184 kB' 'KernelStack: 12816 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7996600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
[trace elided: the loop skips each /proc/meminfo field with "continue" while scanning for HugePages_Rsvd]
# continue 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.339 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:17.340 nr_hugepages=1024 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:17.340 resv_hugepages=0 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:17.340 surplus_hugepages=0 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:17.340 anon_hugepages=0 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
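Read as a whole, the trace above is one call to the get_meminfo helper in spdk/test/setup/common.sh: it snapshots a meminfo file, strips any per-node "Node N" prefixes, then scans field by field until it finds the requested key. A minimal sketch reconstructed from the traced commands; the loop shape and the return-on-miss are inferred from the xtrace, not copied from the SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern seen in the trace

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # with a node argument, prefer that node's sysfs meminfo (common.sh@23-24)
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node N "; strip it (common.sh@29)
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long run of 'continue' lines above
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Rsvd     # -> 0 in this run
    get_meminfo HugePages_Surp 0   # -> 0, read from node0's meminfo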
00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:17.340 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45803068 kB' 'MemAvailable: 49306796 kB' 'Buffers: 2704 kB' 'Cached: 10276024 kB' 'SwapCached: 0 kB' 'Active: 7285168 kB' 'Inactive: 3506596 kB' 'Active(anon): 6890576 kB' 'Inactive(anon): 0 kB' 'Active(file): 394592 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516200 kB' 'Mapped: 172152 kB' 'Shmem: 6377540 kB' 'KReclaimable: 192760 kB' 'Slab: 559944 kB' 'SReclaimable: 192760 kB' 'SUnreclaim: 367184 kB' 'KernelStack: 12816 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7996620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1762908 kB' 'DirectMap2M: 13885440 kB' 'DirectMap1G: 53477376 kB'
00:03:17.340 [xtrace collapsed: setup/common.sh@31-32 compares each snapshot field against HugePages_Total and continues until it matches]
00:03:17.342 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:17.342 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:17.342 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:17.342 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:17.342 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:17.342 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:17.342 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.342 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:17.342 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.342 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:17.342 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:17.342 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
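At this point the test has pulled resv and the global HugePages_Total, asserted the kernel counters add up, and get_nodes has recorded the per-node totals into nodes_sys. A sketch of that accounting; the trace only shows the already-expanded values 1024 and 0, so reading the per-node count from the node's 2048kB pool is a guess, as is the surp variable's origin:

    shopt -s extglob
    declare -a nodes_sys=()

    nr_hugepages=1024
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
    surp=$(get_meminfo HugePages_Surp)   # 0 in this run

    # global consistency: total pages must equal requested + surplus + reserved
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

    # record each NUMA node's configured 2 MiB hugepages (hugepages.sh@29-33)
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this box: node0=1024, node1=0
    (( no_nodes > 0 )) || exit 1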
+([0-9]) }") 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21162908 kB' 'MemUsed: 11714032 kB' 'SwapCached: 0 kB' 'Active: 5421316 kB' 'Inactive: 3264144 kB' 'Active(anon): 5232744 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8395172 kB' 'Mapped: 57720 kB' 'AnonPages: 293400 kB' 'Shmem: 4942456 kB' 'KernelStack: 7432 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121388 kB' 'Slab: 312760 kB' 'SReclaimable: 121388 kB' 'SUnreclaim: 191372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.343 10:16:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.344 10:16:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:17.344 node0=1024 expecting 1024 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:17.344 00:03:17.344 real 0m2.799s 00:03:17.344 user 0m1.188s 00:03:17.344 sys 0m1.532s 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:17.344 10:16:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:17.344 ************************************ 00:03:17.344 END TEST no_shrink_alloc 
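The no_shrink_alloc epilogue that just printed "node0=1024 expecting 1024" compares, per node, what the test expects (nodes_test; its seeding happens before this excerpt) against what the kernel reports (nodes_sys), after folding reserved and per-node surplus pages into the expectation. A sketch along the traced hugepages.sh@115-130 lines, with the sorted_t/sorted_s bookkeeping kept as the trace shows it:

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                # reserved pages count against the node
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # per-node surplus, 0 here
    done
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # collect distinct expected counts
        sorted_s[nodes_sys[node]]=1    # collect distinct actual counts
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
    done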
00:03:17.344 ************************************ 00:03:17.344 10:16:11 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:17.344 10:16:11 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:17.344 10:16:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:17.345 10:16:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:17.345 10:16:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:17.345 10:16:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:17.345 10:16:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:17.345 10:16:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:17.345 10:16:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:17.345 10:16:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:17.345 10:16:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:17.345 10:16:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:17.345 10:16:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:17.345 10:16:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:17.345 10:16:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:17.345 00:03:17.345 real 0m11.170s 00:03:17.345 user 0m4.389s 00:03:17.345 sys 0m5.690s 00:03:17.345 10:16:11 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:17.345 10:16:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:17.345 ************************************ 00:03:17.345 END TEST hugepages 00:03:17.345 ************************************ 00:03:17.345 10:16:11 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:17.345 10:16:11 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:17.604 10:16:11 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:17.604 10:16:11 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:17.604 10:16:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:17.604 ************************************ 00:03:17.604 START TEST driver 00:03:17.604 ************************************ 00:03:17.604 10:16:12 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:17.604 * Looking for test storage... 
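
clear_hp above resets every hugepage pool on every NUMA node so the next test starts from a clean slate; a condensed sketch of that teardown (requires root, and the nr_hugepages target file is the usual sysfs knob rather than anything shown verbatim in this trace):

for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # zero each pool, e.g. 2048kB and 1048576kB
    done
done
export CLEAR_HUGE=yes   # as exported above: setup.sh re-clears before allocating
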
00:03:17.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:17.604 10:16:12 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:17.604 10:16:12 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:17.604 10:16:12 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.145 10:16:14 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:20.145 10:16:14 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.145 10:16:14 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.145 10:16:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:20.145 ************************************ 00:03:20.145 START TEST guess_driver 00:03:20.145 ************************************ 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:20.145 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:20.145 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:20.145 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:20.145 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:20.145 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:20.145 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:20.145 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:20.145 10:16:14 setup.sh.driver.guess_driver 
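
Condensing the pick_driver trace above: vfio-pci wins when the kernel exposes IOMMU groups (141 on this node) or unsafe no-IOMMU mode is enabled, and the module resolves via modprobe. A sketch under those assumptions; the uio_pci_generic fallback is the usual alternative but is not exercised in this run:

vfio_usable() {
    local unsafe=N groups
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    groups=$(compgen -G '/sys/kernel/iommu_groups/*' | wc -l)  # group count
    (( groups > 0 )) || [[ $unsafe == Y ]] || return 1
    modprobe --show-depends vfio_pci &> /dev/null              # module resolvable?
}

vfio_usable && echo vfio-pci || echo uio_pci_generic
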
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:20.145 Looking for driver=vfio-pci 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.145 10:16:14 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:21.519 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.519 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.519 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.519 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.519 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.519 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.519 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.519 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.520 10:16:15 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.520 10:16:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.453 10:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.453 10:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.453 10:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.453 10:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:22.454 10:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:22.454 10:16:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:22.454 10:16:16 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.978 00:03:24.978 real 0m4.838s 00:03:24.978 user 0m1.108s 00:03:24.978 sys 0m1.844s 00:03:24.978 10:16:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.978 10:16:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:24.978 ************************************ 00:03:24.978 END TEST guess_driver 00:03:24.978 ************************************ 00:03:24.978 10:16:19 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:24.978 00:03:24.978 real 0m7.356s 00:03:24.978 user 0m1.664s 00:03:24.978 sys 0m2.810s 00:03:24.978 10:16:19 
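
The START TEST/END TEST banners and the real/user/sys totals above come from the run_test wrapper in autotest_common.sh; a stripped-down sketch of that pattern (not the exact SPDK implementation):

run_test() {
    local name=$1; shift
    printf '%s\n' '************************************' "START TEST $name" \
        '************************************'
    time "$@"                # the timed body produces the real/user/sys lines
    local rc=$?
    printf '%s\n' '************************************' "END TEST $name" \
        '************************************'
    return "$rc"
}

run_test driver ./test/setup/driver.sh   # path as laid out in this workspace
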
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.978 10:16:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:24.978 ************************************ 00:03:24.978 END TEST driver 00:03:24.978 ************************************ 00:03:24.978 10:16:19 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:24.978 10:16:19 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:24.978 10:16:19 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.978 10:16:19 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.978 10:16:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:24.978 ************************************ 00:03:24.978 START TEST devices 00:03:24.978 ************************************ 00:03:24.978 10:16:19 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:24.978 * Looking for test storage... 00:03:24.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:24.978 10:16:19 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:24.978 10:16:19 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:24.978 10:16:19 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:24.978 10:16:19 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:26.351 10:16:20 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:26.351 10:16:20 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:26.351 10:16:20 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:26.351 10:16:20 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:26.351 10:16:20 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:26.351 10:16:20 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:26.351 10:16:20 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:26.351 10:16:20 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:26.351 10:16:20 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:26.351 
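
get_zoned_devs above filters out zoned block devices before picking a test disk: a device counts as zoned when its queue attribute reads anything but "none". Standalone form of that check:

is_block_zoned() {
    local dev=$1   # e.g. nvme0n1
    [[ -e /sys/block/$dev/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$dev/queue/zoned") != none ]]
}

is_block_zoned nvme0n1 || echo "nvme0n1 is usable for the mount tests"
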
10:16:20 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:26.351 No valid GPT data, bailing 00:03:26.351 10:16:20 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:26.351 10:16:20 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:26.351 10:16:20 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:26.351 10:16:20 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:26.351 10:16:20 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:26.351 10:16:20 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:26.351 10:16:20 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:26.351 10:16:20 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.351 10:16:20 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.351 10:16:20 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:26.351 ************************************ 00:03:26.351 START TEST nvme_mount 00:03:26.351 ************************************ 00:03:26.351 10:16:20 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:26.351 10:16:20 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:26.351 10:16:20 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:26.351 10:16:20 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.351 10:16:20 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:26.351 10:16:20 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:26.351 10:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:26.351 10:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:26.351 10:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:26.351 10:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:26.351 10:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:26.351 10:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:26.351 10:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:26.351 10:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:26.351 10:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:26.351 10:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:26.351 10:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:26.352 10:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:26.352 10:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:26.352 10:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:27.726 Creating new GPT entries in memory. 00:03:27.726 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:27.726 other utilities. 00:03:27.726 10:16:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:27.726 10:16:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:27.726 10:16:21 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:27.726 10:16:21 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:27.726 10:16:21 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:28.661 Creating new GPT entries in memory. 00:03:28.661 The operation has completed successfully. 00:03:28.661 10:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:28.661 10:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:28.661 10:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2174957 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:28.661 10:16:23 
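
The partition_drive sequence above, written out as plain commands: zap the GPT, cut one 1 GiB partition (1073741824 / 512 = 2097152 sectors, so sectors 2048 through 2099199, matching the size /= 512 arithmetic in the trace), then format and mount it. The disk is the one under test here; running this destroys its data:

disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:2099199   # partition 1, exactly 1 GiB
mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt" && mount "${disk}p1" "$mnt"
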
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.661 10:16:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:29.593 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.852 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:29.852 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:29.852 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.852 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:29.852 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:29.852 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:29.852 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.852 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.852 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:29.852 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:29.852 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:29.852 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:29.852 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:30.110 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:30.110 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:30.110 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:30.110 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- 
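
cleanup_nvme above strips every signature so the next subtest sees a blank disk; wipefs reports each erased magic, which is exactly what the "bytes were erased" lines record (ext4's 53 ef at offset 0x438, the two GPT "EFI PART" headers, and the protective MBR's 55 aa):

umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 2> /dev/null
wipefs --all /dev/nvme0n1p1 2> /dev/null   # ext4 superblock magic, if present
wipefs --all /dev/nvme0n1                  # GPT headers plus protective MBR
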
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.110 10:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:31.486 10:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.486 10:16:26 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:31.486 10:16:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:31.486 10:16:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:31.486 10:16:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:31.486 10:16:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:31.486 10:16:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:31.486 10:16:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:31.486 10:16:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:31.486 10:16:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.486 10:16:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:31.486 10:16:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:31.486 10:16:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.486 10:16:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
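
The escaped [[ ... == *\A\c\t\i\v\e... ]] tests running through these verify scans are ordinary bash glob matches against setup.sh's status column: found is set to 1 only when the device under test is reported active for the expected reason. Unescaped equivalent:

status='Active devices: data@nvme0n1, so not binding PCI dev'
mounts=data@nvme0n1
[[ $status == *"Active devices: "*"$mounts"* ]] && found=1
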
00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:32.864 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:32.864 00:03:32.864 real 0m6.396s 00:03:32.864 user 0m1.537s 00:03:32.864 sys 0m2.405s 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.864 10:16:27 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:32.864 ************************************ 00:03:32.864 END TEST nvme_mount 00:03:32.864 ************************************ 00:03:32.864 10:16:27 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:32.864 10:16:27 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:32.864 10:16:27 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.864 10:16:27 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.864 10:16:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:32.864 ************************************ 00:03:32.864 START TEST dm_mount 00:03:32.865 ************************************ 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:32.865 10:16:27 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:33.802 Creating new GPT entries in memory. 00:03:33.802 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:33.802 other utilities. 00:03:33.802 10:16:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:33.802 10:16:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:33.802 10:16:28 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:33.802 10:16:28 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:33.802 10:16:28 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:35.181 Creating new GPT entries in memory. 00:03:35.181 The operation has completed successfully. 00:03:35.181 10:16:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:35.181 10:16:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:35.181 10:16:29 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:35.181 10:16:29 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:35.181 10:16:29 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:36.119 The operation has completed successfully. 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2177345 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- 
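
dmsetup create above builds the nvme_dm_test node over the two freshly cut partitions, and the readlink/holders checks confirm it landed on /dev/dm-0. One plausible table, concatenating both 1 GiB partitions into a single linear target; the exact table devices.sh feeds in is not visible in this trace:

dmsetup create nvme_dm_test <<'TABLE'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
TABLE

readlink -f /dev/mapper/nvme_dm_test   # resolves to /dev/dm-0, as checked above
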
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.119 10:16:30 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.055 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:37.312 10:16:31 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.312 10:16:31 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.245 10:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.502 10:16:33 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:38.502 10:16:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:38.502 10:16:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:38.502 10:16:33 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:38.502 10:16:33 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:38.502 10:16:33 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:38.502 10:16:33 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:38.502 10:16:33 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:38.502 10:16:33 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:38.502 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:38.502 10:16:33 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:38.502 10:16:33 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:38.502 00:03:38.502 real 0m5.690s 00:03:38.502 user 0m0.985s 00:03:38.502 sys 0m1.561s 00:03:38.502 10:16:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.502 10:16:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:38.503 ************************************ 00:03:38.503 END TEST dm_mount 00:03:38.503 ************************************ 00:03:38.503 10:16:33 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:03:38.503 10:16:33 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:38.503 10:16:33 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:38.503 10:16:33 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:38.503 10:16:33 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:38.503 10:16:33 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:38.503 10:16:33 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:38.503 10:16:33 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:38.772 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:38.772 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:38.772 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:38.772 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:38.772 10:16:33 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:38.772 10:16:33 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:38.772 10:16:33 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:38.772 10:16:33 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:38.772 10:16:33 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:38.772 10:16:33 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:38.772 10:16:33 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:38.772 00:03:38.772 real 0m13.981s 00:03:38.772 user 0m3.174s 00:03:38.772 sys 0m4.975s 00:03:38.772 10:16:33 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.772 10:16:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:38.772 ************************************ 00:03:38.772 END TEST devices 00:03:38.772 ************************************ 00:03:38.772 10:16:33 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:38.772 00:03:38.772 real 0m43.043s 00:03:38.772 user 0m12.563s 00:03:38.772 sys 0m18.718s 00:03:38.772 10:16:33 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.772 10:16:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:38.772 ************************************ 00:03:38.772 END TEST setup.sh 00:03:38.772 ************************************ 00:03:39.033 10:16:33 -- common/autotest_common.sh@1142 -- # return 0 00:03:39.033 10:16:33 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:39.965 Hugepages 00:03:39.965 node hugesize free / total 00:03:39.965 node0 1048576kB 0 / 0 00:03:39.965 node0 2048kB 2048 / 2048 00:03:39.965 node1 1048576kB 0 / 0 00:03:39.965 node1 2048kB 0 / 0 00:03:39.965 00:03:39.965 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:39.965 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:39.965 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:39.965 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:39.965 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:39.965 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:39.965 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:39.965 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:39.965 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:39.965 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:39.965 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:39.965 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:39.965 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:39.965 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:39.965 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:39.965 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:39.965 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:39.965 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:39.965 10:16:34 -- spdk/autotest.sh@130 -- # uname -s 00:03:39.965 10:16:34 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:39.965 10:16:34 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:39.965 10:16:34 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:41.339 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:41.339 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:41.339 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:41.339 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:41.339 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:41.339 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:41.339 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:41.339 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:41.339 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:41.339 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:41.339 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:41.339 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:41.339 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:41.339 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:41.339 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:41.339 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:42.276 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:42.276 10:16:36 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:43.210 10:16:37 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:43.210 10:16:37 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:43.210 10:16:37 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:43.210 10:16:37 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:43.210 10:16:37 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:43.210 10:16:37 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:43.210 10:16:37 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:43.468 10:16:37 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:43.468 10:16:37 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:43.468 10:16:37 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:43.468 10:16:37 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:43.468 10:16:37 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:44.401 Waiting for block devices as requested 00:03:44.401 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:44.659 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:44.659 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:44.659 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:44.918 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:44.918 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:44.918 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:44.918 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:45.177 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:03:45.177 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:45.177 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:45.177 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:45.437 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:45.437 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:45.437 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:45.695 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:45.695 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:45.695 10:16:40 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:45.695 10:16:40 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:45.695 10:16:40 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:03:45.695 10:16:40 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:45.695 10:16:40 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:45.695 10:16:40 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:45.695 10:16:40 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:45.695 10:16:40 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:45.695 10:16:40 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:45.695 10:16:40 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:45.695 10:16:40 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:45.695 10:16:40 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:45.695 10:16:40 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:45.695 10:16:40 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:45.695 10:16:40 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:45.695 10:16:40 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:45.695 10:16:40 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:45.695 10:16:40 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:45.695 10:16:40 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:45.695 10:16:40 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:45.695 10:16:40 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:45.695 10:16:40 -- common/autotest_common.sh@1557 -- # continue 00:03:45.695 10:16:40 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:45.695 10:16:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:45.695 10:16:40 -- common/autotest_common.sh@10 -- # set +x 00:03:45.695 10:16:40 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:45.695 10:16:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:45.695 10:16:40 -- common/autotest_common.sh@10 -- # set +x 00:03:45.695 10:16:40 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:47.072 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:47.072 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:47.072 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:47.072 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:47.072 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:47.072 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:47.072 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:47.072 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:47.072 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:47.072 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:03:47.072 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:47.072 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:47.072 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:47.072 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:47.072 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:47.072 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:48.008 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:48.267 10:16:42 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:48.267 10:16:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:48.267 10:16:42 -- common/autotest_common.sh@10 -- # set +x 00:03:48.267 10:16:42 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:48.267 10:16:42 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:48.267 10:16:42 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:48.267 10:16:42 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:48.267 10:16:42 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:48.267 10:16:42 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:48.267 10:16:42 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:48.267 10:16:42 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:48.267 10:16:42 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:48.267 10:16:42 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:48.267 10:16:42 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:48.267 10:16:42 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:48.267 10:16:42 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:48.267 10:16:42 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:48.267 10:16:42 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:48.267 10:16:42 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:48.267 10:16:42 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:48.267 10:16:42 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:48.267 10:16:42 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:03:48.267 10:16:42 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:03:48.267 10:16:42 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2182523 00:03:48.267 10:16:42 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.267 10:16:42 -- common/autotest_common.sh@1598 -- # waitforlisten 2182523 00:03:48.267 10:16:42 -- common/autotest_common.sh@829 -- # '[' -z 2182523 ']' 00:03:48.267 10:16:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.267 10:16:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:48.267 10:16:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.267 10:16:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:48.267 10:16:42 -- common/autotest_common.sh@10 -- # set +x 00:03:48.267 [2024-07-15 10:16:42.887020] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:03:48.267 [2024-07-15 10:16:42.887110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182523 ] 00:03:48.267 EAL: No free 2048 kB hugepages reported on node 1 00:03:48.525 [2024-07-15 10:16:42.945551] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.525 [2024-07-15 10:16:43.054940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.784 10:16:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:48.784 10:16:43 -- common/autotest_common.sh@862 -- # return 0 00:03:48.784 10:16:43 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:48.784 10:16:43 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:48.784 10:16:43 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:52.065 nvme0n1 00:03:52.065 10:16:46 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:52.065 [2024-07-15 10:16:46.645125] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:52.065 [2024-07-15 10:16:46.645185] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:52.065 request: 00:03:52.065 { 00:03:52.065 "nvme_ctrlr_name": "nvme0", 00:03:52.065 "password": "test", 00:03:52.065 "method": "bdev_nvme_opal_revert", 00:03:52.065 "req_id": 1 00:03:52.065 } 00:03:52.065 Got JSON-RPC error response 00:03:52.065 response: 00:03:52.065 { 00:03:52.065 "code": -32603, 00:03:52.065 "message": "Internal error" 00:03:52.065 } 00:03:52.065 10:16:46 -- common/autotest_common.sh@1604 -- # true 00:03:52.065 10:16:46 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:52.065 10:16:46 -- common/autotest_common.sh@1608 -- # killprocess 2182523 00:03:52.065 10:16:46 -- common/autotest_common.sh@948 -- # '[' -z 2182523 ']' 00:03:52.065 10:16:46 -- common/autotest_common.sh@952 -- # kill -0 2182523 00:03:52.065 10:16:46 -- common/autotest_common.sh@953 -- # uname 00:03:52.065 10:16:46 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:52.065 10:16:46 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2182523 00:03:52.065 10:16:46 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:52.065 10:16:46 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:52.065 10:16:46 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2182523' 00:03:52.065 killing process with pid 2182523 00:03:52.065 10:16:46 -- common/autotest_common.sh@967 -- # kill 2182523 00:03:52.065 10:16:46 -- common/autotest_common.sh@972 -- # wait 2182523 00:03:53.964 10:16:48 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:53.964 10:16:48 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:53.964 10:16:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:53.964 10:16:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:53.964 10:16:48 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:53.964 10:16:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.964 10:16:48 -- common/autotest_common.sh@10 -- # set +x 00:03:53.964 10:16:48 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:53.964 10:16:48 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:53.964 10:16:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.964 10:16:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.964 10:16:48 -- common/autotest_common.sh@10 -- # set +x 00:03:53.964 ************************************ 00:03:53.964 START TEST env 00:03:53.964 ************************************ 00:03:53.964 10:16:48 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:53.964 * Looking for test storage... 00:03:54.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:54.221 10:16:48 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:54.221 10:16:48 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.221 10:16:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.221 10:16:48 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.221 ************************************ 00:03:54.221 START TEST env_memory 00:03:54.221 ************************************ 00:03:54.221 10:16:48 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:54.221 00:03:54.221 00:03:54.221 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.221 http://cunit.sourceforge.net/ 00:03:54.221 00:03:54.221 00:03:54.221 Suite: memory 00:03:54.221 Test: alloc and free memory map ...[2024-07-15 10:16:48.676047] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:54.221 passed 00:03:54.221 Test: mem map translation ...[2024-07-15 10:16:48.695938] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:54.221 [2024-07-15 10:16:48.695958] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:54.221 [2024-07-15 10:16:48.696014] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:54.221 [2024-07-15 10:16:48.696026] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:54.221 passed 00:03:54.221 Test: mem map registration ...[2024-07-15 10:16:48.736502] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:54.221 [2024-07-15 10:16:48.736520] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:54.221 passed 00:03:54.221 Test: mem map adjacent registrations ...passed 00:03:54.221 00:03:54.221 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.221 suites 1 1 n/a 0 0 00:03:54.221 tests 4 4 4 0 0 00:03:54.221 asserts 152 152 152 0 n/a 00:03:54.221 00:03:54.221 Elapsed time = 0.140 seconds 00:03:54.221 00:03:54.221 real 0m0.149s 00:03:54.221 user 0m0.141s 00:03:54.221 sys 0m0.007s 00:03:54.221 10:16:48 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.221 10:16:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:54.221 ************************************ 00:03:54.221 END TEST env_memory 00:03:54.221 ************************************ 00:03:54.221 10:16:48 env -- common/autotest_common.sh@1142 -- # return 0 00:03:54.221 10:16:48 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:54.221 10:16:48 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.221 10:16:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.221 10:16:48 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.221 ************************************ 00:03:54.221 START TEST env_vtophys 00:03:54.221 ************************************ 00:03:54.221 10:16:48 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:54.221 EAL: lib.eal log level changed from notice to debug 00:03:54.221 EAL: Detected lcore 0 as core 0 on socket 0 00:03:54.221 EAL: Detected lcore 1 as core 1 on socket 0 00:03:54.221 EAL: Detected lcore 2 as core 2 on socket 0 00:03:54.221 EAL: Detected lcore 3 as core 3 on socket 0 00:03:54.221 EAL: Detected lcore 4 as core 4 on socket 0 00:03:54.221 EAL: Detected lcore 5 as core 5 on socket 0 00:03:54.221 EAL: Detected lcore 6 as core 8 on socket 0 00:03:54.221 EAL: Detected lcore 7 as core 9 on socket 0 00:03:54.221 EAL: Detected lcore 8 as core 10 on socket 0 00:03:54.221 EAL: Detected lcore 9 as core 11 on socket 0 00:03:54.221 EAL: Detected lcore 10 as core 12 on socket 0 00:03:54.221 EAL: Detected lcore 11 as core 13 on socket 0 00:03:54.221 EAL: Detected lcore 12 as core 0 on socket 1 00:03:54.221 EAL: Detected lcore 13 as core 1 on socket 1 00:03:54.221 EAL: Detected lcore 14 as core 2 on socket 1 00:03:54.221 EAL: Detected lcore 15 as core 3 on socket 1 00:03:54.221 EAL: Detected lcore 16 as core 4 on socket 1 00:03:54.221 EAL: Detected lcore 17 as core 5 on socket 1 00:03:54.221 EAL: Detected lcore 18 as core 8 on socket 1 00:03:54.221 EAL: Detected lcore 19 as core 9 on socket 1 00:03:54.221 EAL: Detected lcore 20 as core 10 on socket 1 00:03:54.221 EAL: Detected lcore 21 as core 11 on socket 1 00:03:54.221 EAL: Detected lcore 22 as core 12 on socket 1 00:03:54.221 EAL: Detected lcore 23 as core 13 on socket 1 00:03:54.221 EAL: Detected lcore 24 as core 0 on socket 0 00:03:54.221 EAL: Detected lcore 25 as core 1 on socket 0 00:03:54.221 EAL: Detected lcore 26 as core 2 on socket 0 00:03:54.221 EAL: Detected lcore 27 as core 3 on socket 0 00:03:54.221 EAL: Detected lcore 28 as core 4 on socket 0 00:03:54.221 EAL: Detected lcore 29 as core 5 on socket 0 00:03:54.221 EAL: Detected lcore 30 as core 8 on socket 0 00:03:54.221 EAL: Detected lcore 31 as core 9 on socket 0 00:03:54.221 EAL: Detected lcore 32 as core 10 on socket 0 00:03:54.221 EAL: Detected lcore 33 as core 11 on socket 0 00:03:54.221 EAL: Detected lcore 34 as core 12 on socket 0 00:03:54.221 EAL: Detected lcore 35 as core 13 on socket 0 00:03:54.221 EAL: Detected lcore 36 as core 0 on socket 1 00:03:54.221 EAL: Detected lcore 37 as core 1 on socket 1 00:03:54.221 EAL: Detected lcore 38 as core 2 on socket 1 00:03:54.221 EAL: Detected lcore 39 as core 3 on socket 1 00:03:54.221 EAL: Detected lcore 40 as core 4 on socket 1 00:03:54.221 EAL: Detected lcore 41 as core 5 on socket 1 00:03:54.221 EAL: Detected 
lcore 42 as core 8 on socket 1 00:03:54.221 EAL: Detected lcore 43 as core 9 on socket 1 00:03:54.221 EAL: Detected lcore 44 as core 10 on socket 1 00:03:54.221 EAL: Detected lcore 45 as core 11 on socket 1 00:03:54.221 EAL: Detected lcore 46 as core 12 on socket 1 00:03:54.221 EAL: Detected lcore 47 as core 13 on socket 1 00:03:54.221 EAL: Maximum logical cores by configuration: 128 00:03:54.221 EAL: Detected CPU lcores: 48 00:03:54.221 EAL: Detected NUMA nodes: 2 00:03:54.221 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:54.221 EAL: Detected shared linkage of DPDK 00:03:54.221 EAL: No shared files mode enabled, IPC will be disabled 00:03:54.479 EAL: Bus pci wants IOVA as 'DC' 00:03:54.479 EAL: Buses did not request a specific IOVA mode. 00:03:54.479 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:54.479 EAL: Selected IOVA mode 'VA' 00:03:54.479 EAL: No free 2048 kB hugepages reported on node 1 00:03:54.479 EAL: Probing VFIO support... 00:03:54.479 EAL: IOMMU type 1 (Type 1) is supported 00:03:54.479 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:54.479 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:54.479 EAL: VFIO support initialized 00:03:54.479 EAL: Ask a virtual area of 0x2e000 bytes 00:03:54.479 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:54.479 EAL: Setting up physically contiguous memory... 00:03:54.479 EAL: Setting maximum number of open files to 524288 00:03:54.479 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:54.479 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:54.479 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:54.479 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.479 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:54.479 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:54.479 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.479 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:54.479 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:54.479 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.479 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:54.479 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:54.479 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.479 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:54.479 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:54.479 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.479 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:54.479 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:54.479 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.479 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:54.479 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:54.479 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.479 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:54.479 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:54.479 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.479 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:54.479 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:54.479 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:54.479 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.479 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:54.479 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:03:54.479 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.479 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:54.479 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:54.479 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.479 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:54.479 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:54.479 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.479 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:54.479 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:54.479 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.479 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:54.479 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:54.479 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.479 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:54.479 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:54.479 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.479 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:54.479 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:54.479 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.479 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:54.479 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:54.479 EAL: Hugepages will be freed exactly as allocated. 00:03:54.479 EAL: No shared files mode enabled, IPC is disabled 00:03:54.479 EAL: No shared files mode enabled, IPC is disabled 00:03:54.479 EAL: TSC frequency is ~2700000 KHz 00:03:54.479 EAL: Main lcore 0 is ready (tid=7fed6678ea00;cpuset=[0]) 00:03:54.479 EAL: Trying to obtain current memory policy. 00:03:54.479 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.479 EAL: Restoring previous memory policy: 0 00:03:54.479 EAL: request: mp_malloc_sync 00:03:54.479 EAL: No shared files mode enabled, IPC is disabled 00:03:54.479 EAL: Heap on socket 0 was expanded by 2MB 00:03:54.479 EAL: No shared files mode enabled, IPC is disabled 00:03:54.479 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:54.479 EAL: Mem event callback 'spdk:(nil)' registered 00:03:54.479 00:03:54.479 00:03:54.479 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.479 http://cunit.sourceforge.net/ 00:03:54.479 00:03:54.479 00:03:54.479 Suite: components_suite 00:03:54.479 Test: vtophys_malloc_test ...passed 00:03:54.479 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:54.479 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.479 EAL: Restoring previous memory policy: 4 00:03:54.479 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.479 EAL: request: mp_malloc_sync 00:03:54.479 EAL: No shared files mode enabled, IPC is disabled 00:03:54.479 EAL: Heap on socket 0 was expanded by 4MB 00:03:54.479 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.479 EAL: request: mp_malloc_sync 00:03:54.479 EAL: No shared files mode enabled, IPC is disabled 00:03:54.479 EAL: Heap on socket 0 was shrunk by 4MB 00:03:54.479 EAL: Trying to obtain current memory policy. 
00:03:54.479 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.479 EAL: Restoring previous memory policy: 4 00:03:54.479 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.479 EAL: request: mp_malloc_sync 00:03:54.479 EAL: No shared files mode enabled, IPC is disabled 00:03:54.479 EAL: Heap on socket 0 was expanded by 6MB 00:03:54.479 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.479 EAL: request: mp_malloc_sync 00:03:54.479 EAL: No shared files mode enabled, IPC is disabled 00:03:54.479 EAL: Heap on socket 0 was shrunk by 6MB 00:03:54.479 EAL: Trying to obtain current memory policy. 00:03:54.479 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.479 EAL: Restoring previous memory policy: 4 00:03:54.479 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.479 EAL: request: mp_malloc_sync 00:03:54.479 EAL: No shared files mode enabled, IPC is disabled 00:03:54.479 EAL: Heap on socket 0 was expanded by 10MB 00:03:54.479 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.479 EAL: request: mp_malloc_sync 00:03:54.479 EAL: No shared files mode enabled, IPC is disabled 00:03:54.479 EAL: Heap on socket 0 was shrunk by 10MB 00:03:54.479 EAL: Trying to obtain current memory policy. 00:03:54.479 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.479 EAL: Restoring previous memory policy: 4 00:03:54.480 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.480 EAL: request: mp_malloc_sync 00:03:54.480 EAL: No shared files mode enabled, IPC is disabled 00:03:54.480 EAL: Heap on socket 0 was expanded by 18MB 00:03:54.480 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.480 EAL: request: mp_malloc_sync 00:03:54.480 EAL: No shared files mode enabled, IPC is disabled 00:03:54.480 EAL: Heap on socket 0 was shrunk by 18MB 00:03:54.480 EAL: Trying to obtain current memory policy. 00:03:54.480 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.480 EAL: Restoring previous memory policy: 4 00:03:54.480 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.480 EAL: request: mp_malloc_sync 00:03:54.480 EAL: No shared files mode enabled, IPC is disabled 00:03:54.480 EAL: Heap on socket 0 was expanded by 34MB 00:03:54.480 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.480 EAL: request: mp_malloc_sync 00:03:54.480 EAL: No shared files mode enabled, IPC is disabled 00:03:54.480 EAL: Heap on socket 0 was shrunk by 34MB 00:03:54.480 EAL: Trying to obtain current memory policy. 00:03:54.480 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.480 EAL: Restoring previous memory policy: 4 00:03:54.480 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.480 EAL: request: mp_malloc_sync 00:03:54.480 EAL: No shared files mode enabled, IPC is disabled 00:03:54.480 EAL: Heap on socket 0 was expanded by 66MB 00:03:54.480 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.480 EAL: request: mp_malloc_sync 00:03:54.480 EAL: No shared files mode enabled, IPC is disabled 00:03:54.480 EAL: Heap on socket 0 was shrunk by 66MB 00:03:54.480 EAL: Trying to obtain current memory policy. 
00:03:54.480 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.480 EAL: Restoring previous memory policy: 4 00:03:54.480 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.480 EAL: request: mp_malloc_sync 00:03:54.480 EAL: No shared files mode enabled, IPC is disabled 00:03:54.480 EAL: Heap on socket 0 was expanded by 130MB 00:03:54.480 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.480 EAL: request: mp_malloc_sync 00:03:54.480 EAL: No shared files mode enabled, IPC is disabled 00:03:54.480 EAL: Heap on socket 0 was shrunk by 130MB 00:03:54.480 EAL: Trying to obtain current memory policy. 00:03:54.480 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.737 EAL: Restoring previous memory policy: 4 00:03:54.737 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.737 EAL: request: mp_malloc_sync 00:03:54.737 EAL: No shared files mode enabled, IPC is disabled 00:03:54.737 EAL: Heap on socket 0 was expanded by 258MB 00:03:54.737 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.737 EAL: request: mp_malloc_sync 00:03:54.737 EAL: No shared files mode enabled, IPC is disabled 00:03:54.737 EAL: Heap on socket 0 was shrunk by 258MB 00:03:54.737 EAL: Trying to obtain current memory policy. 00:03:54.737 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.737 EAL: Restoring previous memory policy: 4 00:03:54.737 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.737 EAL: request: mp_malloc_sync 00:03:54.737 EAL: No shared files mode enabled, IPC is disabled 00:03:54.737 EAL: Heap on socket 0 was expanded by 514MB 00:03:54.995 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.995 EAL: request: mp_malloc_sync 00:03:54.995 EAL: No shared files mode enabled, IPC is disabled 00:03:54.995 EAL: Heap on socket 0 was shrunk by 514MB 00:03:54.995 EAL: Trying to obtain current memory policy. 
00:03:54.995 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.253 EAL: Restoring previous memory policy: 4 00:03:55.253 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.253 EAL: request: mp_malloc_sync 00:03:55.253 EAL: No shared files mode enabled, IPC is disabled 00:03:55.253 EAL: Heap on socket 0 was expanded by 1026MB 00:03:55.512 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.808 EAL: request: mp_malloc_sync 00:03:55.808 EAL: No shared files mode enabled, IPC is disabled 00:03:55.808 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:55.808 passed 00:03:55.808 00:03:55.808 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.808 suites 1 1 n/a 0 0 00:03:55.808 tests 2 2 2 0 0 00:03:55.808 asserts 497 497 497 0 n/a 00:03:55.808 00:03:55.808 Elapsed time = 1.385 seconds 00:03:55.808 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.808 EAL: request: mp_malloc_sync 00:03:55.808 EAL: No shared files mode enabled, IPC is disabled 00:03:55.808 EAL: Heap on socket 0 was shrunk by 2MB 00:03:55.808 EAL: No shared files mode enabled, IPC is disabled 00:03:55.808 EAL: No shared files mode enabled, IPC is disabled 00:03:55.808 EAL: No shared files mode enabled, IPC is disabled 00:03:55.808 00:03:55.808 real 0m1.499s 00:03:55.808 user 0m0.882s 00:03:55.808 sys 0m0.585s 00:03:55.808 10:16:50 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.808 10:16:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:55.808 ************************************ 00:03:55.808 END TEST env_vtophys 00:03:55.808 ************************************ 00:03:55.808 10:16:50 env -- common/autotest_common.sh@1142 -- # return 0 00:03:55.808 10:16:50 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:55.808 10:16:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.808 10:16:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.808 10:16:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.808 ************************************ 00:03:55.808 START TEST env_pci 00:03:55.808 ************************************ 00:03:55.808 10:16:50 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:55.808 00:03:55.808 00:03:55.808 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.808 http://cunit.sourceforge.net/ 00:03:55.808 00:03:55.808 00:03:55.808 Suite: pci 00:03:55.808 Test: pci_hook ...[2024-07-15 10:16:50.385192] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2183410 has claimed it 00:03:55.808 EAL: Cannot find device (10000:00:01.0) 00:03:55.808 EAL: Failed to attach device on primary process 00:03:55.808 passed 00:03:55.808 00:03:55.808 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.808 suites 1 1 n/a 0 0 00:03:55.808 tests 1 1 1 0 0 00:03:55.808 asserts 25 25 25 0 n/a 00:03:55.808 00:03:55.808 Elapsed time = 0.021 seconds 00:03:55.808 00:03:55.808 real 0m0.034s 00:03:55.808 user 0m0.009s 00:03:55.808 sys 0m0.025s 00:03:55.808 10:16:50 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.808 10:16:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:55.808 ************************************ 00:03:55.808 END TEST env_pci 00:03:55.808 ************************************ 
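[Editor's note — annotation, not part of the captured log] The env_pci suite above closes with the same banner-and-timing block that brackets every suite in this log: the asterisk-framed START TEST/END TEST lines plus a real/user/sys trio, emitted by the run_test wrapper from SPDK's autotest_common.sh. The bash sketch below is a minimal reconstruction of that pattern for illustration only; it assumes details not visible in this log, and SPDK's actual implementation differs.

# Sketch of a run_test-style wrapper (assumed structure, not SPDK's code).
run_test() {
	local suite=$1
	shift
	# Banner matching the "START TEST <name>" lines seen throughout the log.
	echo "************************************"
	echo "START TEST $suite"
	echo "************************************"
	# The bash 'time' keyword emits the real/user/sys lines recorded
	# after each suite above.
	time "$@"
	local rc=$?
	echo "************************************"
	echo "END TEST $suite"
	echo "************************************"
	return $rc
}

# Example invocation mirroring the log:
#   run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut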
00:03:55.808 10:16:50 env -- common/autotest_common.sh@1142 -- # return 0 00:03:55.808 10:16:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:55.808 10:16:50 env -- env/env.sh@15 -- # uname 00:03:56.067 10:16:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:56.067 10:16:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:56.067 10:16:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:56.067 10:16:50 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:56.067 10:16:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.067 10:16:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.067 ************************************ 00:03:56.067 START TEST env_dpdk_post_init 00:03:56.067 ************************************ 00:03:56.067 10:16:50 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:56.067 EAL: Detected CPU lcores: 48 00:03:56.067 EAL: Detected NUMA nodes: 2 00:03:56.067 EAL: Detected shared linkage of DPDK 00:03:56.067 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:56.067 EAL: Selected IOVA mode 'VA' 00:03:56.067 EAL: No free 2048 kB hugepages reported on node 1 00:03:56.067 EAL: VFIO support initialized 00:03:56.067 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:56.067 EAL: Using IOMMU type 1 (Type 1) 00:03:56.067 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:56.067 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:56.067 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:56.067 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:56.067 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:56.067 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:56.067 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:56.067 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:56.067 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:56.067 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:56.067 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:56.067 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:56.325 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:56.325 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:56.325 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:56.325 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:56.888 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:00.158 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:00.158 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:00.414 Starting DPDK initialization... 00:04:00.414 Starting SPDK post initialization... 00:04:00.414 SPDK NVMe probe 00:04:00.414 Attaching to 0000:88:00.0 00:04:00.414 Attached to 0000:88:00.0 00:04:00.414 Cleaning up... 
00:04:00.414 00:04:00.414 real 0m4.409s 00:04:00.414 user 0m3.283s 00:04:00.414 sys 0m0.180s 00:04:00.414 10:16:54 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.414 10:16:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:00.414 ************************************ 00:04:00.414 END TEST env_dpdk_post_init 00:04:00.414 ************************************ 00:04:00.415 10:16:54 env -- common/autotest_common.sh@1142 -- # return 0 00:04:00.415 10:16:54 env -- env/env.sh@26 -- # uname 00:04:00.415 10:16:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:00.415 10:16:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:00.415 10:16:54 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.415 10:16:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.415 10:16:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.415 ************************************ 00:04:00.415 START TEST env_mem_callbacks 00:04:00.415 ************************************ 00:04:00.415 10:16:54 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:00.415 EAL: Detected CPU lcores: 48 00:04:00.415 EAL: Detected NUMA nodes: 2 00:04:00.415 EAL: Detected shared linkage of DPDK 00:04:00.415 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:00.415 EAL: Selected IOVA mode 'VA' 00:04:00.415 EAL: No free 2048 kB hugepages reported on node 1 00:04:00.415 EAL: VFIO support initialized 00:04:00.415 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:00.415 00:04:00.415 00:04:00.415 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.415 http://cunit.sourceforge.net/ 00:04:00.415 00:04:00.415 00:04:00.415 Suite: memory 00:04:00.415 Test: test ... 
00:04:00.415 register 0x200000200000 2097152 00:04:00.415 malloc 3145728 00:04:00.415 register 0x200000400000 4194304 00:04:00.415 buf 0x200000500000 len 3145728 PASSED 00:04:00.415 malloc 64 00:04:00.415 buf 0x2000004fff40 len 64 PASSED 00:04:00.415 malloc 4194304 00:04:00.415 register 0x200000800000 6291456 00:04:00.415 buf 0x200000a00000 len 4194304 PASSED 00:04:00.415 free 0x200000500000 3145728 00:04:00.415 free 0x2000004fff40 64 00:04:00.415 unregister 0x200000400000 4194304 PASSED 00:04:00.415 free 0x200000a00000 4194304 00:04:00.415 unregister 0x200000800000 6291456 PASSED 00:04:00.415 malloc 8388608 00:04:00.415 register 0x200000400000 10485760 00:04:00.415 buf 0x200000600000 len 8388608 PASSED 00:04:00.415 free 0x200000600000 8388608 00:04:00.415 unregister 0x200000400000 10485760 PASSED 00:04:00.415 passed 00:04:00.415 00:04:00.415 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.415 suites 1 1 n/a 0 0 00:04:00.415 tests 1 1 1 0 0 00:04:00.415 asserts 15 15 15 0 n/a 00:04:00.415 00:04:00.415 Elapsed time = 0.005 seconds 00:04:00.415 00:04:00.415 real 0m0.047s 00:04:00.415 user 0m0.016s 00:04:00.415 sys 0m0.031s 00:04:00.415 10:16:54 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.415 10:16:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:00.415 ************************************ 00:04:00.415 END TEST env_mem_callbacks 00:04:00.415 ************************************ 00:04:00.415 10:16:54 env -- common/autotest_common.sh@1142 -- # return 0 00:04:00.415 00:04:00.415 real 0m6.419s 00:04:00.415 user 0m4.430s 00:04:00.415 sys 0m1.029s 00:04:00.415 10:16:54 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.415 10:16:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.415 ************************************ 00:04:00.415 END TEST env 00:04:00.415 ************************************ 00:04:00.415 10:16:55 -- common/autotest_common.sh@1142 -- # return 0 00:04:00.415 10:16:55 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:00.415 10:16:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.415 10:16:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.415 10:16:55 -- common/autotest_common.sh@10 -- # set +x 00:04:00.415 ************************************ 00:04:00.415 START TEST rpc 00:04:00.415 ************************************ 00:04:00.415 10:16:55 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:00.672 * Looking for test storage... 00:04:00.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:00.672 10:16:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2184186 00:04:00.672 10:16:55 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:00.672 10:16:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:00.672 10:16:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2184186 00:04:00.672 10:16:55 rpc -- common/autotest_common.sh@829 -- # '[' -z 2184186 ']' 00:04:00.673 10:16:55 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.673 10:16:55 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:00.673 10:16:55 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:00.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.673 10:16:55 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:00.673 10:16:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.673 [2024-07-15 10:16:55.130673] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:00.673 [2024-07-15 10:16:55.130772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2184186 ] 00:04:00.673 EAL: No free 2048 kB hugepages reported on node 1 00:04:00.673 [2024-07-15 10:16:55.187035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.673 [2024-07-15 10:16:55.291749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:00.673 [2024-07-15 10:16:55.291807] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2184186' to capture a snapshot of events at runtime. 00:04:00.673 [2024-07-15 10:16:55.291834] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:00.673 [2024-07-15 10:16:55.291845] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:00.673 [2024-07-15 10:16:55.291855] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2184186 for offline analysis/debug. 00:04:00.673 [2024-07-15 10:16:55.291904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.931 10:16:55 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:00.931 10:16:55 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:00.931 10:16:55 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:00.931 10:16:55 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:00.931 10:16:55 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:00.931 10:16:55 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:00.931 10:16:55 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.931 10:16:55 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.931 10:16:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.189 ************************************ 00:04:01.189 START TEST rpc_integrity 00:04:01.189 ************************************ 00:04:01.189 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:01.189 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:01.189 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.189 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.189 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.189 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:01.189 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:01.189 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:01.189 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:01.189 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.189 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.189 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.189 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:01.189 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:01.189 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.189 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.189 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.189 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:01.189 { 00:04:01.189 "name": "Malloc0", 00:04:01.189 "aliases": [ 00:04:01.189 "74334706-9718-4968-8336-6aca4d5ac988" 00:04:01.189 ], 00:04:01.189 "product_name": "Malloc disk", 00:04:01.189 "block_size": 512, 00:04:01.189 "num_blocks": 16384, 00:04:01.189 "uuid": "74334706-9718-4968-8336-6aca4d5ac988", 00:04:01.189 "assigned_rate_limits": { 00:04:01.189 "rw_ios_per_sec": 0, 00:04:01.189 "rw_mbytes_per_sec": 0, 00:04:01.189 "r_mbytes_per_sec": 0, 00:04:01.189 "w_mbytes_per_sec": 0 00:04:01.189 }, 00:04:01.189 "claimed": false, 00:04:01.189 "zoned": false, 00:04:01.189 "supported_io_types": { 00:04:01.189 "read": true, 00:04:01.189 "write": true, 00:04:01.189 "unmap": true, 00:04:01.189 "flush": true, 00:04:01.189 "reset": true, 00:04:01.189 "nvme_admin": false, 00:04:01.189 "nvme_io": false, 00:04:01.189 "nvme_io_md": false, 00:04:01.189 "write_zeroes": true, 00:04:01.189 "zcopy": true, 00:04:01.189 "get_zone_info": false, 00:04:01.189 "zone_management": false, 00:04:01.189 "zone_append": false, 00:04:01.189 "compare": false, 00:04:01.189 "compare_and_write": false, 00:04:01.189 "abort": true, 00:04:01.189 "seek_hole": false, 00:04:01.189 "seek_data": false, 00:04:01.189 "copy": true, 00:04:01.189 "nvme_iov_md": false 00:04:01.189 }, 00:04:01.189 "memory_domains": [ 00:04:01.189 { 00:04:01.189 "dma_device_id": "system", 00:04:01.189 "dma_device_type": 1 00:04:01.189 }, 00:04:01.189 { 00:04:01.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.189 "dma_device_type": 2 00:04:01.189 } 00:04:01.189 ], 00:04:01.189 "driver_specific": {} 00:04:01.189 } 00:04:01.189 ]' 00:04:01.189 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:01.189 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:01.189 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:01.189 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.189 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.189 [2024-07-15 10:16:55.690973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:01.189 [2024-07-15 10:16:55.691026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:01.189 [2024-07-15 10:16:55.691047] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b45d50 00:04:01.189 [2024-07-15 10:16:55.691061] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:01.189 
[2024-07-15 10:16:55.692565] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:01.189 [2024-07-15 10:16:55.692592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:01.189 Passthru0 00:04:01.189 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.189 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:01.189 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.189 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.189 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.189 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:01.189 { 00:04:01.189 "name": "Malloc0", 00:04:01.189 "aliases": [ 00:04:01.189 "74334706-9718-4968-8336-6aca4d5ac988" 00:04:01.189 ], 00:04:01.189 "product_name": "Malloc disk", 00:04:01.189 "block_size": 512, 00:04:01.189 "num_blocks": 16384, 00:04:01.189 "uuid": "74334706-9718-4968-8336-6aca4d5ac988", 00:04:01.189 "assigned_rate_limits": { 00:04:01.189 "rw_ios_per_sec": 0, 00:04:01.189 "rw_mbytes_per_sec": 0, 00:04:01.189 "r_mbytes_per_sec": 0, 00:04:01.189 "w_mbytes_per_sec": 0 00:04:01.189 }, 00:04:01.189 "claimed": true, 00:04:01.189 "claim_type": "exclusive_write", 00:04:01.189 "zoned": false, 00:04:01.189 "supported_io_types": { 00:04:01.189 "read": true, 00:04:01.189 "write": true, 00:04:01.189 "unmap": true, 00:04:01.189 "flush": true, 00:04:01.189 "reset": true, 00:04:01.189 "nvme_admin": false, 00:04:01.189 "nvme_io": false, 00:04:01.189 "nvme_io_md": false, 00:04:01.189 "write_zeroes": true, 00:04:01.189 "zcopy": true, 00:04:01.189 "get_zone_info": false, 00:04:01.189 "zone_management": false, 00:04:01.189 "zone_append": false, 00:04:01.189 "compare": false, 00:04:01.189 "compare_and_write": false, 00:04:01.189 "abort": true, 00:04:01.189 "seek_hole": false, 00:04:01.189 "seek_data": false, 00:04:01.189 "copy": true, 00:04:01.189 "nvme_iov_md": false 00:04:01.189 }, 00:04:01.189 "memory_domains": [ 00:04:01.189 { 00:04:01.189 "dma_device_id": "system", 00:04:01.189 "dma_device_type": 1 00:04:01.189 }, 00:04:01.189 { 00:04:01.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.189 "dma_device_type": 2 00:04:01.189 } 00:04:01.189 ], 00:04:01.189 "driver_specific": {} 00:04:01.189 }, 00:04:01.189 { 00:04:01.189 "name": "Passthru0", 00:04:01.189 "aliases": [ 00:04:01.189 "303a814e-e039-59e9-96d1-ef9b0682b381" 00:04:01.189 ], 00:04:01.189 "product_name": "passthru", 00:04:01.189 "block_size": 512, 00:04:01.189 "num_blocks": 16384, 00:04:01.189 "uuid": "303a814e-e039-59e9-96d1-ef9b0682b381", 00:04:01.189 "assigned_rate_limits": { 00:04:01.189 "rw_ios_per_sec": 0, 00:04:01.189 "rw_mbytes_per_sec": 0, 00:04:01.189 "r_mbytes_per_sec": 0, 00:04:01.189 "w_mbytes_per_sec": 0 00:04:01.189 }, 00:04:01.189 "claimed": false, 00:04:01.189 "zoned": false, 00:04:01.189 "supported_io_types": { 00:04:01.189 "read": true, 00:04:01.189 "write": true, 00:04:01.190 "unmap": true, 00:04:01.190 "flush": true, 00:04:01.190 "reset": true, 00:04:01.190 "nvme_admin": false, 00:04:01.190 "nvme_io": false, 00:04:01.190 "nvme_io_md": false, 00:04:01.190 "write_zeroes": true, 00:04:01.190 "zcopy": true, 00:04:01.190 "get_zone_info": false, 00:04:01.190 "zone_management": false, 00:04:01.190 "zone_append": false, 00:04:01.190 "compare": false, 00:04:01.190 "compare_and_write": false, 00:04:01.190 "abort": true, 00:04:01.190 "seek_hole": false, 
00:04:01.190 "seek_data": false, 00:04:01.190 "copy": true, 00:04:01.190 "nvme_iov_md": false 00:04:01.190 }, 00:04:01.190 "memory_domains": [ 00:04:01.190 { 00:04:01.190 "dma_device_id": "system", 00:04:01.190 "dma_device_type": 1 00:04:01.190 }, 00:04:01.190 { 00:04:01.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.190 "dma_device_type": 2 00:04:01.190 } 00:04:01.190 ], 00:04:01.190 "driver_specific": { 00:04:01.190 "passthru": { 00:04:01.190 "name": "Passthru0", 00:04:01.190 "base_bdev_name": "Malloc0" 00:04:01.190 } 00:04:01.190 } 00:04:01.190 } 00:04:01.190 ]' 00:04:01.190 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:01.190 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:01.190 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:01.190 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.190 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.190 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.190 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:01.190 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.190 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.190 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.190 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:01.190 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.190 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.190 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.190 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:01.190 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:01.190 10:16:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:01.190 00:04:01.190 real 0m0.225s 00:04:01.190 user 0m0.152s 00:04:01.190 sys 0m0.020s 00:04:01.190 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.190 10:16:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.190 ************************************ 00:04:01.190 END TEST rpc_integrity 00:04:01.190 ************************************ 00:04:01.190 10:16:55 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:01.190 10:16:55 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:01.190 10:16:55 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.190 10:16:55 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.190 10:16:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.447 ************************************ 00:04:01.447 START TEST rpc_plugins 00:04:01.447 ************************************ 00:04:01.447 10:16:55 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:01.447 10:16:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:01.447 10:16:55 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.447 10:16:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.447 10:16:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.448 10:16:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:01.448 10:16:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:01.448 10:16:55 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.448 10:16:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.448 10:16:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.448 10:16:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:01.448 { 00:04:01.448 "name": "Malloc1", 00:04:01.448 "aliases": [ 00:04:01.448 "1a93cc1c-f3ab-4806-a979-71e931cd0e06" 00:04:01.448 ], 00:04:01.448 "product_name": "Malloc disk", 00:04:01.448 "block_size": 4096, 00:04:01.448 "num_blocks": 256, 00:04:01.448 "uuid": "1a93cc1c-f3ab-4806-a979-71e931cd0e06", 00:04:01.448 "assigned_rate_limits": { 00:04:01.448 "rw_ios_per_sec": 0, 00:04:01.448 "rw_mbytes_per_sec": 0, 00:04:01.448 "r_mbytes_per_sec": 0, 00:04:01.448 "w_mbytes_per_sec": 0 00:04:01.448 }, 00:04:01.448 "claimed": false, 00:04:01.448 "zoned": false, 00:04:01.448 "supported_io_types": { 00:04:01.448 "read": true, 00:04:01.448 "write": true, 00:04:01.448 "unmap": true, 00:04:01.448 "flush": true, 00:04:01.448 "reset": true, 00:04:01.448 "nvme_admin": false, 00:04:01.448 "nvme_io": false, 00:04:01.448 "nvme_io_md": false, 00:04:01.448 "write_zeroes": true, 00:04:01.448 "zcopy": true, 00:04:01.448 "get_zone_info": false, 00:04:01.448 "zone_management": false, 00:04:01.448 "zone_append": false, 00:04:01.448 "compare": false, 00:04:01.448 "compare_and_write": false, 00:04:01.448 "abort": true, 00:04:01.448 "seek_hole": false, 00:04:01.448 "seek_data": false, 00:04:01.448 "copy": true, 00:04:01.448 "nvme_iov_md": false 00:04:01.448 }, 00:04:01.448 "memory_domains": [ 00:04:01.448 { 00:04:01.448 "dma_device_id": "system", 00:04:01.448 "dma_device_type": 1 00:04:01.448 }, 00:04:01.448 { 00:04:01.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.448 "dma_device_type": 2 00:04:01.448 } 00:04:01.448 ], 00:04:01.448 "driver_specific": {} 00:04:01.448 } 00:04:01.448 ]' 00:04:01.448 10:16:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:01.448 10:16:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:01.448 10:16:55 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:01.448 10:16:55 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.448 10:16:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.448 10:16:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.448 10:16:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:01.448 10:16:55 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.448 10:16:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.448 10:16:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.448 10:16:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:01.448 10:16:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:01.448 10:16:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:01.448 00:04:01.448 real 0m0.112s 00:04:01.448 user 0m0.073s 00:04:01.448 sys 0m0.012s 00:04:01.448 10:16:55 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.448 10:16:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.448 ************************************ 00:04:01.448 END TEST rpc_plugins 00:04:01.448 ************************************ 00:04:01.448 10:16:55 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:01.448 10:16:55 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:01.448 10:16:55 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.448 10:16:55 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.448 10:16:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.448 ************************************ 00:04:01.448 START TEST rpc_trace_cmd_test 00:04:01.448 ************************************ 00:04:01.448 10:16:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:01.448 10:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:01.448 10:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:01.448 10:16:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.448 10:16:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:01.448 10:16:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.448 10:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:01.448 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2184186", 00:04:01.448 "tpoint_group_mask": "0x8", 00:04:01.448 "iscsi_conn": { 00:04:01.448 "mask": "0x2", 00:04:01.448 "tpoint_mask": "0x0" 00:04:01.448 }, 00:04:01.448 "scsi": { 00:04:01.448 "mask": "0x4", 00:04:01.448 "tpoint_mask": "0x0" 00:04:01.448 }, 00:04:01.448 "bdev": { 00:04:01.448 "mask": "0x8", 00:04:01.448 "tpoint_mask": "0xffffffffffffffff" 00:04:01.448 }, 00:04:01.448 "nvmf_rdma": { 00:04:01.448 "mask": "0x10", 00:04:01.448 "tpoint_mask": "0x0" 00:04:01.448 }, 00:04:01.448 "nvmf_tcp": { 00:04:01.448 "mask": "0x20", 00:04:01.448 "tpoint_mask": "0x0" 00:04:01.448 }, 00:04:01.448 "ftl": { 00:04:01.448 "mask": "0x40", 00:04:01.448 "tpoint_mask": "0x0" 00:04:01.448 }, 00:04:01.448 "blobfs": { 00:04:01.448 "mask": "0x80", 00:04:01.448 "tpoint_mask": "0x0" 00:04:01.448 }, 00:04:01.448 "dsa": { 00:04:01.448 "mask": "0x200", 00:04:01.448 "tpoint_mask": "0x0" 00:04:01.448 }, 00:04:01.448 "thread": { 00:04:01.448 "mask": "0x400", 00:04:01.448 "tpoint_mask": "0x0" 00:04:01.448 }, 00:04:01.448 "nvme_pcie": { 00:04:01.448 "mask": "0x800", 00:04:01.448 "tpoint_mask": "0x0" 00:04:01.448 }, 00:04:01.448 "iaa": { 00:04:01.448 "mask": "0x1000", 00:04:01.448 "tpoint_mask": "0x0" 00:04:01.448 }, 00:04:01.448 "nvme_tcp": { 00:04:01.448 "mask": "0x2000", 00:04:01.448 "tpoint_mask": "0x0" 00:04:01.448 }, 00:04:01.448 "bdev_nvme": { 00:04:01.448 "mask": "0x4000", 00:04:01.448 "tpoint_mask": "0x0" 00:04:01.448 }, 00:04:01.448 "sock": { 00:04:01.448 "mask": "0x8000", 00:04:01.448 "tpoint_mask": "0x0" 00:04:01.448 } 00:04:01.448 }' 00:04:01.448 10:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:01.448 10:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:01.448 10:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:01.706 10:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:01.706 10:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:01.706 10:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:01.706 10:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:01.706 10:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:01.706 10:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:01.706 10:16:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
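The trace_get_info dump above reflects the '-e bdev' flag the target was started with: only the bdev group (mask 0x8) carries a non-zero tpoint_mask, and tpoint_shm_path names the shared-memory file the tracer writes. The same state can be inspected by hand against a running target; the scripts/rpc.py path and the default /var/tmp/spdk.sock socket are assumptions here, and the spdk_trace invocation is the one the target itself suggested in its startup notices (a sketch, not part of rpc.sh):

    # list trace groups and their masks over JSON-RPC (default socket assumed)
    scripts/rpc.py trace_get_info
    # decode the shm file named in tpoint_shm_path; pid must match the target
    spdk_trace -s spdk_tgt -p 2184186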
00:04:01.706 00:04:01.706 real 0m0.201s 00:04:01.706 user 0m0.171s 00:04:01.706 sys 0m0.020s 00:04:01.706 10:16:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.706 10:16:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:01.706 ************************************ 00:04:01.706 END TEST rpc_trace_cmd_test 00:04:01.706 ************************************ 00:04:01.706 10:16:56 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:01.706 10:16:56 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:01.706 10:16:56 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:01.706 10:16:56 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:01.706 10:16:56 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.706 10:16:56 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.706 10:16:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.706 ************************************ 00:04:01.706 START TEST rpc_daemon_integrity 00:04:01.706 ************************************ 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.706 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:01.706 { 00:04:01.706 "name": "Malloc2", 00:04:01.706 "aliases": [ 00:04:01.706 "898d4417-ccd4-430c-be65-9aca8ed93052" 00:04:01.706 ], 00:04:01.706 "product_name": "Malloc disk", 00:04:01.706 "block_size": 512, 00:04:01.706 "num_blocks": 16384, 00:04:01.706 "uuid": "898d4417-ccd4-430c-be65-9aca8ed93052", 00:04:01.706 "assigned_rate_limits": { 00:04:01.706 "rw_ios_per_sec": 0, 00:04:01.706 "rw_mbytes_per_sec": 0, 00:04:01.706 "r_mbytes_per_sec": 0, 00:04:01.706 "w_mbytes_per_sec": 0 00:04:01.706 }, 00:04:01.706 "claimed": false, 00:04:01.706 "zoned": false, 00:04:01.706 "supported_io_types": { 00:04:01.706 "read": true, 00:04:01.706 "write": true, 00:04:01.706 "unmap": true, 00:04:01.706 "flush": true, 00:04:01.706 "reset": true, 00:04:01.706 "nvme_admin": false, 00:04:01.706 "nvme_io": false, 
00:04:01.706 "nvme_io_md": false, 00:04:01.706 "write_zeroes": true, 00:04:01.706 "zcopy": true, 00:04:01.706 "get_zone_info": false, 00:04:01.706 "zone_management": false, 00:04:01.706 "zone_append": false, 00:04:01.706 "compare": false, 00:04:01.706 "compare_and_write": false, 00:04:01.706 "abort": true, 00:04:01.706 "seek_hole": false, 00:04:01.706 "seek_data": false, 00:04:01.706 "copy": true, 00:04:01.706 "nvme_iov_md": false 00:04:01.706 }, 00:04:01.706 "memory_domains": [ 00:04:01.706 { 00:04:01.706 "dma_device_id": "system", 00:04:01.707 "dma_device_type": 1 00:04:01.707 }, 00:04:01.707 { 00:04:01.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.707 "dma_device_type": 2 00:04:01.707 } 00:04:01.707 ], 00:04:01.707 "driver_specific": {} 00:04:01.707 } 00:04:01.707 ]' 00:04:01.707 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.964 [2024-07-15 10:16:56.369271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:01.964 [2024-07-15 10:16:56.369315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:01.964 [2024-07-15 10:16:56.369344] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b45980 00:04:01.964 [2024-07-15 10:16:56.369359] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:01.964 [2024-07-15 10:16:56.370688] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:01.964 [2024-07-15 10:16:56.370719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:01.964 Passthru0 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:01.964 { 00:04:01.964 "name": "Malloc2", 00:04:01.964 "aliases": [ 00:04:01.964 "898d4417-ccd4-430c-be65-9aca8ed93052" 00:04:01.964 ], 00:04:01.964 "product_name": "Malloc disk", 00:04:01.964 "block_size": 512, 00:04:01.964 "num_blocks": 16384, 00:04:01.964 "uuid": "898d4417-ccd4-430c-be65-9aca8ed93052", 00:04:01.964 "assigned_rate_limits": { 00:04:01.964 "rw_ios_per_sec": 0, 00:04:01.964 "rw_mbytes_per_sec": 0, 00:04:01.964 "r_mbytes_per_sec": 0, 00:04:01.964 "w_mbytes_per_sec": 0 00:04:01.964 }, 00:04:01.964 "claimed": true, 00:04:01.964 "claim_type": "exclusive_write", 00:04:01.964 "zoned": false, 00:04:01.964 "supported_io_types": { 00:04:01.964 "read": true, 00:04:01.964 "write": true, 00:04:01.964 "unmap": true, 00:04:01.964 "flush": true, 00:04:01.964 "reset": true, 00:04:01.964 "nvme_admin": false, 00:04:01.964 "nvme_io": false, 00:04:01.964 "nvme_io_md": false, 00:04:01.964 "write_zeroes": true, 00:04:01.964 "zcopy": true, 00:04:01.964 "get_zone_info": 
false, 00:04:01.964 "zone_management": false, 00:04:01.964 "zone_append": false, 00:04:01.964 "compare": false, 00:04:01.964 "compare_and_write": false, 00:04:01.964 "abort": true, 00:04:01.964 "seek_hole": false, 00:04:01.964 "seek_data": false, 00:04:01.964 "copy": true, 00:04:01.964 "nvme_iov_md": false 00:04:01.964 }, 00:04:01.964 "memory_domains": [ 00:04:01.964 { 00:04:01.964 "dma_device_id": "system", 00:04:01.964 "dma_device_type": 1 00:04:01.964 }, 00:04:01.964 { 00:04:01.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.964 "dma_device_type": 2 00:04:01.964 } 00:04:01.964 ], 00:04:01.964 "driver_specific": {} 00:04:01.964 }, 00:04:01.964 { 00:04:01.964 "name": "Passthru0", 00:04:01.964 "aliases": [ 00:04:01.964 "7deba0f5-920b-574c-acf9-ada71e797eca" 00:04:01.964 ], 00:04:01.964 "product_name": "passthru", 00:04:01.964 "block_size": 512, 00:04:01.964 "num_blocks": 16384, 00:04:01.964 "uuid": "7deba0f5-920b-574c-acf9-ada71e797eca", 00:04:01.964 "assigned_rate_limits": { 00:04:01.964 "rw_ios_per_sec": 0, 00:04:01.964 "rw_mbytes_per_sec": 0, 00:04:01.964 "r_mbytes_per_sec": 0, 00:04:01.964 "w_mbytes_per_sec": 0 00:04:01.964 }, 00:04:01.964 "claimed": false, 00:04:01.964 "zoned": false, 00:04:01.964 "supported_io_types": { 00:04:01.964 "read": true, 00:04:01.964 "write": true, 00:04:01.964 "unmap": true, 00:04:01.964 "flush": true, 00:04:01.964 "reset": true, 00:04:01.964 "nvme_admin": false, 00:04:01.964 "nvme_io": false, 00:04:01.964 "nvme_io_md": false, 00:04:01.964 "write_zeroes": true, 00:04:01.964 "zcopy": true, 00:04:01.964 "get_zone_info": false, 00:04:01.964 "zone_management": false, 00:04:01.964 "zone_append": false, 00:04:01.964 "compare": false, 00:04:01.964 "compare_and_write": false, 00:04:01.964 "abort": true, 00:04:01.964 "seek_hole": false, 00:04:01.964 "seek_data": false, 00:04:01.964 "copy": true, 00:04:01.964 "nvme_iov_md": false 00:04:01.964 }, 00:04:01.964 "memory_domains": [ 00:04:01.964 { 00:04:01.964 "dma_device_id": "system", 00:04:01.964 "dma_device_type": 1 00:04:01.964 }, 00:04:01.964 { 00:04:01.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.964 "dma_device_type": 2 00:04:01.964 } 00:04:01.964 ], 00:04:01.964 "driver_specific": { 00:04:01.964 "passthru": { 00:04:01.964 "name": "Passthru0", 00:04:01.964 "base_bdev_name": "Malloc2" 00:04:01.964 } 00:04:01.964 } 00:04:01.964 } 00:04:01.964 ]' 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.964 10:16:56 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:01.964 00:04:01.964 real 0m0.228s 00:04:01.964 user 0m0.150s 00:04:01.964 sys 0m0.023s 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.964 10:16:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.964 ************************************ 00:04:01.964 END TEST rpc_daemon_integrity 00:04:01.964 ************************************ 00:04:01.964 10:16:56 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:01.964 10:16:56 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:01.964 10:16:56 rpc -- rpc/rpc.sh@84 -- # killprocess 2184186 00:04:01.964 10:16:56 rpc -- common/autotest_common.sh@948 -- # '[' -z 2184186 ']' 00:04:01.964 10:16:56 rpc -- common/autotest_common.sh@952 -- # kill -0 2184186 00:04:01.964 10:16:56 rpc -- common/autotest_common.sh@953 -- # uname 00:04:01.964 10:16:56 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:01.965 10:16:56 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2184186 00:04:01.965 10:16:56 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:01.965 10:16:56 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:01.965 10:16:56 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2184186' 00:04:01.965 killing process with pid 2184186 00:04:01.965 10:16:56 rpc -- common/autotest_common.sh@967 -- # kill 2184186 00:04:01.965 10:16:56 rpc -- common/autotest_common.sh@972 -- # wait 2184186 00:04:02.530 00:04:02.530 real 0m1.984s 00:04:02.530 user 0m2.448s 00:04:02.530 sys 0m0.600s 00:04:02.530 10:16:57 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.530 10:16:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.530 ************************************ 00:04:02.530 END TEST rpc 00:04:02.530 ************************************ 00:04:02.530 10:16:57 -- common/autotest_common.sh@1142 -- # return 0 00:04:02.530 10:16:57 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:02.530 10:16:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.530 10:16:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.530 10:16:57 -- common/autotest_common.sh@10 -- # set +x 00:04:02.530 ************************************ 00:04:02.530 START TEST skip_rpc 00:04:02.530 ************************************ 00:04:02.530 10:16:57 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:02.530 * Looking for test storage... 
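The TEST rpc suite that just completed drives spdk_tgt purely over its JSON-RPC socket: create a malloc bdev, claim it with a passthru bdev, confirm both appear in bdev_get_bdevs, then tear down in reverse order. A minimal by-hand equivalent, assuming a target is already listening on the default /var/tmp/spdk.sock, is:

    scripts/rpc.py bdev_malloc_create 8 512                      # named Malloc0 in the run above
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # claims Malloc0
    scripts/rpc.py bdev_get_bdevs | jq length                    # expect 2
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0                    # jq length drops back to 0

The rpc_cmd wrapper seen in the log is the harness's thin shim around these same calls.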
00:04:02.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:02.530 10:16:57 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:02.530 10:16:57 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:02.530 10:16:57 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:02.530 10:16:57 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.530 10:16:57 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.530 10:16:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.530 ************************************ 00:04:02.530 START TEST skip_rpc 00:04:02.530 ************************************ 00:04:02.530 10:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:02.530 10:16:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2184504 00:04:02.530 10:16:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:02.530 10:16:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.530 10:16:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:02.788 [2024-07-15 10:16:57.182757] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:02.788 [2024-07-15 10:16:57.182842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2184504 ] 00:04:02.788 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.788 [2024-07-15 10:16:57.240542] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.788 [2024-07-15 10:16:57.351659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2184504 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2184504 ']' 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2184504 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2184504 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2184504' 00:04:08.095 killing process with pid 2184504 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2184504 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2184504 00:04:08.095 00:04:08.095 real 0m5.489s 00:04:08.095 user 0m5.179s 00:04:08.095 sys 0m0.313s 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.095 10:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.095 ************************************ 00:04:08.095 END TEST skip_rpc 00:04:08.095 ************************************ 00:04:08.095 10:17:02 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:08.095 10:17:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:08.095 10:17:02 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.095 10:17:02 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.095 10:17:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.096 ************************************ 00:04:08.096 START TEST skip_rpc_with_json 00:04:08.096 ************************************ 00:04:08.096 10:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:08.096 10:17:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:08.096 10:17:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2185191 00:04:08.096 10:17:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:08.096 10:17:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.096 10:17:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2185191 00:04:08.096 10:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2185191 ']' 00:04:08.096 10:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.096 10:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:08.096 10:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
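TEST skip_rpc, which ended just above, asserts the inverse property: with --no-rpc-server the target must reject RPC clients, and the harness's NOT helper turns that expected failure (es=1) into a pass. A rough standalone version of the same check, with binary paths and the default socket assumed, would look like:

    spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5    # the test waits the same way before probing
    if scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC served despite --no-rpc-server" >&2
    fi
    kill "$tgt_pid"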
00:04:08.096 10:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:08.096 10:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.096 [2024-07-15 10:17:02.719419] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:08.096 [2024-07-15 10:17:02.719508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185191 ] 00:04:08.353 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.353 [2024-07-15 10:17:02.776932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.353 [2024-07-15 10:17:02.888334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.611 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:08.611 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:08.611 10:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:08.611 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.611 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.611 [2024-07-15 10:17:03.144324] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:08.611 request: 00:04:08.611 { 00:04:08.611 "trtype": "tcp", 00:04:08.611 "method": "nvmf_get_transports", 00:04:08.611 "req_id": 1 00:04:08.611 } 00:04:08.611 Got JSON-RPC error response 00:04:08.611 response: 00:04:08.611 { 00:04:08.611 "code": -19, 00:04:08.611 "message": "No such device" 00:04:08.611 } 00:04:08.611 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:08.611 10:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:08.611 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.611 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.611 [2024-07-15 10:17:03.152461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:08.611 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.611 10:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:08.611 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.611 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.869 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.869 10:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:08.869 { 00:04:08.869 "subsystems": [ 00:04:08.869 { 00:04:08.869 "subsystem": "vfio_user_target", 00:04:08.869 "config": null 00:04:08.869 }, 00:04:08.869 { 00:04:08.869 "subsystem": "keyring", 00:04:08.869 "config": [] 00:04:08.869 }, 00:04:08.869 { 00:04:08.869 "subsystem": "iobuf", 00:04:08.869 "config": [ 00:04:08.869 { 00:04:08.869 "method": "iobuf_set_options", 00:04:08.869 "params": { 00:04:08.869 "small_pool_count": 8192, 00:04:08.869 "large_pool_count": 1024, 00:04:08.869 "small_bufsize": 8192, 00:04:08.869 "large_bufsize": 
135168 00:04:08.869 } 00:04:08.869 } 00:04:08.869 ] 00:04:08.869 }, 00:04:08.869 { 00:04:08.869 "subsystem": "sock", 00:04:08.869 "config": [ 00:04:08.869 { 00:04:08.869 "method": "sock_set_default_impl", 00:04:08.869 "params": { 00:04:08.869 "impl_name": "posix" 00:04:08.869 } 00:04:08.869 }, 00:04:08.869 { 00:04:08.869 "method": "sock_impl_set_options", 00:04:08.869 "params": { 00:04:08.869 "impl_name": "ssl", 00:04:08.869 "recv_buf_size": 4096, 00:04:08.869 "send_buf_size": 4096, 00:04:08.869 "enable_recv_pipe": true, 00:04:08.869 "enable_quickack": false, 00:04:08.869 "enable_placement_id": 0, 00:04:08.869 "enable_zerocopy_send_server": true, 00:04:08.869 "enable_zerocopy_send_client": false, 00:04:08.869 "zerocopy_threshold": 0, 00:04:08.869 "tls_version": 0, 00:04:08.869 "enable_ktls": false 00:04:08.869 } 00:04:08.869 }, 00:04:08.869 { 00:04:08.869 "method": "sock_impl_set_options", 00:04:08.869 "params": { 00:04:08.869 "impl_name": "posix", 00:04:08.869 "recv_buf_size": 2097152, 00:04:08.869 "send_buf_size": 2097152, 00:04:08.869 "enable_recv_pipe": true, 00:04:08.869 "enable_quickack": false, 00:04:08.869 "enable_placement_id": 0, 00:04:08.869 "enable_zerocopy_send_server": true, 00:04:08.869 "enable_zerocopy_send_client": false, 00:04:08.869 "zerocopy_threshold": 0, 00:04:08.869 "tls_version": 0, 00:04:08.869 "enable_ktls": false 00:04:08.869 } 00:04:08.869 } 00:04:08.869 ] 00:04:08.869 }, 00:04:08.869 { 00:04:08.869 "subsystem": "vmd", 00:04:08.869 "config": [] 00:04:08.869 }, 00:04:08.869 { 00:04:08.869 "subsystem": "accel", 00:04:08.869 "config": [ 00:04:08.869 { 00:04:08.869 "method": "accel_set_options", 00:04:08.869 "params": { 00:04:08.869 "small_cache_size": 128, 00:04:08.869 "large_cache_size": 16, 00:04:08.869 "task_count": 2048, 00:04:08.869 "sequence_count": 2048, 00:04:08.869 "buf_count": 2048 00:04:08.869 } 00:04:08.869 } 00:04:08.869 ] 00:04:08.869 }, 00:04:08.869 { 00:04:08.869 "subsystem": "bdev", 00:04:08.869 "config": [ 00:04:08.869 { 00:04:08.869 "method": "bdev_set_options", 00:04:08.869 "params": { 00:04:08.869 "bdev_io_pool_size": 65535, 00:04:08.869 "bdev_io_cache_size": 256, 00:04:08.869 "bdev_auto_examine": true, 00:04:08.869 "iobuf_small_cache_size": 128, 00:04:08.869 "iobuf_large_cache_size": 16 00:04:08.869 } 00:04:08.869 }, 00:04:08.869 { 00:04:08.869 "method": "bdev_raid_set_options", 00:04:08.869 "params": { 00:04:08.869 "process_window_size_kb": 1024 00:04:08.869 } 00:04:08.869 }, 00:04:08.869 { 00:04:08.869 "method": "bdev_iscsi_set_options", 00:04:08.869 "params": { 00:04:08.869 "timeout_sec": 30 00:04:08.869 } 00:04:08.869 }, 00:04:08.869 { 00:04:08.869 "method": "bdev_nvme_set_options", 00:04:08.869 "params": { 00:04:08.869 "action_on_timeout": "none", 00:04:08.869 "timeout_us": 0, 00:04:08.869 "timeout_admin_us": 0, 00:04:08.869 "keep_alive_timeout_ms": 10000, 00:04:08.869 "arbitration_burst": 0, 00:04:08.869 "low_priority_weight": 0, 00:04:08.869 "medium_priority_weight": 0, 00:04:08.869 "high_priority_weight": 0, 00:04:08.869 "nvme_adminq_poll_period_us": 10000, 00:04:08.869 "nvme_ioq_poll_period_us": 0, 00:04:08.869 "io_queue_requests": 0, 00:04:08.869 "delay_cmd_submit": true, 00:04:08.869 "transport_retry_count": 4, 00:04:08.869 "bdev_retry_count": 3, 00:04:08.869 "transport_ack_timeout": 0, 00:04:08.869 "ctrlr_loss_timeout_sec": 0, 00:04:08.869 "reconnect_delay_sec": 0, 00:04:08.869 "fast_io_fail_timeout_sec": 0, 00:04:08.869 "disable_auto_failback": false, 00:04:08.869 "generate_uuids": false, 00:04:08.869 "transport_tos": 0, 
00:04:08.869 "nvme_error_stat": false, 00:04:08.869 "rdma_srq_size": 0, 00:04:08.869 "io_path_stat": false, 00:04:08.869 "allow_accel_sequence": false, 00:04:08.869 "rdma_max_cq_size": 0, 00:04:08.869 "rdma_cm_event_timeout_ms": 0, 00:04:08.869 "dhchap_digests": [ 00:04:08.869 "sha256", 00:04:08.869 "sha384", 00:04:08.869 "sha512" 00:04:08.869 ], 00:04:08.869 "dhchap_dhgroups": [ 00:04:08.869 "null", 00:04:08.869 "ffdhe2048", 00:04:08.869 "ffdhe3072", 00:04:08.869 "ffdhe4096", 00:04:08.869 "ffdhe6144", 00:04:08.869 "ffdhe8192" 00:04:08.869 ] 00:04:08.869 } 00:04:08.869 }, 00:04:08.869 { 00:04:08.869 "method": "bdev_nvme_set_hotplug", 00:04:08.869 "params": { 00:04:08.869 "period_us": 100000, 00:04:08.869 "enable": false 00:04:08.869 } 00:04:08.869 }, 00:04:08.869 { 00:04:08.869 "method": "bdev_wait_for_examine" 00:04:08.869 } 00:04:08.869 ] 00:04:08.869 }, 00:04:08.869 { 00:04:08.869 "subsystem": "scsi", 00:04:08.869 "config": null 00:04:08.869 }, 00:04:08.869 { 00:04:08.869 "subsystem": "scheduler", 00:04:08.869 "config": [ 00:04:08.869 { 00:04:08.869 "method": "framework_set_scheduler", 00:04:08.869 "params": { 00:04:08.870 "name": "static" 00:04:08.870 } 00:04:08.870 } 00:04:08.870 ] 00:04:08.870 }, 00:04:08.870 { 00:04:08.870 "subsystem": "vhost_scsi", 00:04:08.870 "config": [] 00:04:08.870 }, 00:04:08.870 { 00:04:08.870 "subsystem": "vhost_blk", 00:04:08.870 "config": [] 00:04:08.870 }, 00:04:08.870 { 00:04:08.870 "subsystem": "ublk", 00:04:08.870 "config": [] 00:04:08.870 }, 00:04:08.870 { 00:04:08.870 "subsystem": "nbd", 00:04:08.870 "config": [] 00:04:08.870 }, 00:04:08.870 { 00:04:08.870 "subsystem": "nvmf", 00:04:08.870 "config": [ 00:04:08.870 { 00:04:08.870 "method": "nvmf_set_config", 00:04:08.870 "params": { 00:04:08.870 "discovery_filter": "match_any", 00:04:08.870 "admin_cmd_passthru": { 00:04:08.870 "identify_ctrlr": false 00:04:08.870 } 00:04:08.870 } 00:04:08.870 }, 00:04:08.870 { 00:04:08.870 "method": "nvmf_set_max_subsystems", 00:04:08.870 "params": { 00:04:08.870 "max_subsystems": 1024 00:04:08.870 } 00:04:08.870 }, 00:04:08.870 { 00:04:08.870 "method": "nvmf_set_crdt", 00:04:08.870 "params": { 00:04:08.870 "crdt1": 0, 00:04:08.870 "crdt2": 0, 00:04:08.870 "crdt3": 0 00:04:08.870 } 00:04:08.870 }, 00:04:08.870 { 00:04:08.870 "method": "nvmf_create_transport", 00:04:08.870 "params": { 00:04:08.870 "trtype": "TCP", 00:04:08.870 "max_queue_depth": 128, 00:04:08.870 "max_io_qpairs_per_ctrlr": 127, 00:04:08.870 "in_capsule_data_size": 4096, 00:04:08.870 "max_io_size": 131072, 00:04:08.870 "io_unit_size": 131072, 00:04:08.870 "max_aq_depth": 128, 00:04:08.870 "num_shared_buffers": 511, 00:04:08.870 "buf_cache_size": 4294967295, 00:04:08.870 "dif_insert_or_strip": false, 00:04:08.870 "zcopy": false, 00:04:08.870 "c2h_success": true, 00:04:08.870 "sock_priority": 0, 00:04:08.870 "abort_timeout_sec": 1, 00:04:08.870 "ack_timeout": 0, 00:04:08.870 "data_wr_pool_size": 0 00:04:08.870 } 00:04:08.870 } 00:04:08.870 ] 00:04:08.870 }, 00:04:08.870 { 00:04:08.870 "subsystem": "iscsi", 00:04:08.870 "config": [ 00:04:08.870 { 00:04:08.870 "method": "iscsi_set_options", 00:04:08.870 "params": { 00:04:08.870 "node_base": "iqn.2016-06.io.spdk", 00:04:08.870 "max_sessions": 128, 00:04:08.870 "max_connections_per_session": 2, 00:04:08.870 "max_queue_depth": 64, 00:04:08.870 "default_time2wait": 2, 00:04:08.870 "default_time2retain": 20, 00:04:08.870 "first_burst_length": 8192, 00:04:08.870 "immediate_data": true, 00:04:08.870 "allow_duplicated_isid": false, 00:04:08.870 
"error_recovery_level": 0, 00:04:08.870 "nop_timeout": 60, 00:04:08.870 "nop_in_interval": 30, 00:04:08.870 "disable_chap": false, 00:04:08.870 "require_chap": false, 00:04:08.870 "mutual_chap": false, 00:04:08.870 "chap_group": 0, 00:04:08.870 "max_large_datain_per_connection": 64, 00:04:08.870 "max_r2t_per_connection": 4, 00:04:08.870 "pdu_pool_size": 36864, 00:04:08.870 "immediate_data_pool_size": 16384, 00:04:08.870 "data_out_pool_size": 2048 00:04:08.870 } 00:04:08.870 } 00:04:08.870 ] 00:04:08.870 } 00:04:08.870 ] 00:04:08.870 } 00:04:08.870 10:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:08.870 10:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2185191 00:04:08.870 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2185191 ']' 00:04:08.870 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2185191 00:04:08.870 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:08.870 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:08.870 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2185191 00:04:08.870 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:08.870 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:08.870 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2185191' 00:04:08.870 killing process with pid 2185191 00:04:08.870 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2185191 00:04:08.870 10:17:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2185191 00:04:09.434 10:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2185332 00:04:09.434 10:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:09.434 10:17:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:14.693 10:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2185332 00:04:14.693 10:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2185332 ']' 00:04:14.693 10:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2185332 00:04:14.693 10:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:14.693 10:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:14.693 10:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2185332 00:04:14.693 10:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:14.693 10:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:14.693 10:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2185332' 00:04:14.693 killing process with pid 2185332 00:04:14.693 10:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2185332 00:04:14.693 10:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2185332 
00:04:14.693 10:17:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:14.693 10:17:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:14.693 00:04:14.693 real 0m6.619s 00:04:14.693 user 0m6.223s 00:04:14.693 sys 0m0.686s 00:04:14.693 10:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.693 10:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.693 ************************************ 00:04:14.693 END TEST skip_rpc_with_json 00:04:14.693 ************************************ 00:04:14.693 10:17:09 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:14.693 10:17:09 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:14.693 10:17:09 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.693 10:17:09 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.693 10:17:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.693 ************************************ 00:04:14.693 START TEST skip_rpc_with_delay 00:04:14.693 ************************************ 00:04:14.693 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:14.693 10:17:09 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:14.693 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:14.693 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:14.693 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.693 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:14.693 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.693 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:14.693 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.693 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:14.693 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.693 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:14.693 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:14.950 [2024-07-15 10:17:09.392162] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:14.951 [2024-07-15 10:17:09.392290] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:14.951 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:14.951 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:14.951 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:14.951 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:14.951 00:04:14.951 real 0m0.070s 00:04:14.951 user 0m0.045s 00:04:14.951 sys 0m0.025s 00:04:14.951 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.951 10:17:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:14.951 ************************************ 00:04:14.951 END TEST skip_rpc_with_delay 00:04:14.951 ************************************ 00:04:14.951 10:17:09 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:14.951 10:17:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:14.951 10:17:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:14.951 10:17:09 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:14.951 10:17:09 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.951 10:17:09 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.951 10:17:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.951 ************************************ 00:04:14.951 START TEST exit_on_failed_rpc_init 00:04:14.951 ************************************ 00:04:14.951 10:17:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:14.951 10:17:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2186050 00:04:14.951 10:17:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:14.951 10:17:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2186050 00:04:14.951 10:17:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2186050 ']' 00:04:14.951 10:17:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.951 10:17:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:14.951 10:17:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.951 10:17:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:14.951 10:17:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.951 [2024-07-15 10:17:09.510664] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:14.951 [2024-07-15 10:17:09.510763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186050 ] 00:04:14.951 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.951 [2024-07-15 10:17:09.572629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.208 [2024-07-15 10:17:09.686900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.140 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:16.140 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:16.140 10:17:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.140 10:17:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:16.140 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:16.140 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:16.140 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.140 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:16.140 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.140 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:16.140 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.140 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:16.140 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.140 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:16.140 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:16.140 [2024-07-15 10:17:10.495261] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:16.140 [2024-07-15 10:17:10.495348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186190 ] 00:04:16.140 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.140 [2024-07-15 10:17:10.555331] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.140 [2024-07-15 10:17:10.674705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:16.140 [2024-07-15 10:17:10.674830] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:16.140 [2024-07-15 10:17:10.674857] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:16.140 [2024-07-15 10:17:10.674872] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2186050 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2186050 ']' 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2186050 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2186050 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2186050' 00:04:16.467 killing process with pid 2186050 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2186050 00:04:16.467 10:17:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2186050 00:04:16.725 00:04:16.725 real 0m1.825s 00:04:16.725 user 0m2.167s 00:04:16.725 sys 0m0.502s 00:04:16.725 10:17:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.725 10:17:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:16.725 ************************************ 00:04:16.725 END TEST exit_on_failed_rpc_init 00:04:16.725 ************************************ 00:04:16.725 10:17:11 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:16.725 10:17:11 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:16.725 00:04:16.725 real 0m14.248s 00:04:16.725 user 0m13.718s 00:04:16.725 sys 0m1.685s 00:04:16.725 10:17:11 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.725 10:17:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.725 ************************************ 00:04:16.725 END TEST skip_rpc 00:04:16.725 ************************************ 00:04:16.725 10:17:11 -- common/autotest_common.sh@1142 -- # return 0 00:04:16.725 10:17:11 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:16.725 10:17:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.725 10:17:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.725 10:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:16.725 ************************************ 00:04:16.725 START TEST rpc_client 00:04:16.725 ************************************ 00:04:16.725 10:17:11 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:17.046 * Looking for test storage... 00:04:17.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:17.046 10:17:11 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:17.046 OK 00:04:17.046 10:17:11 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:17.046 00:04:17.046 real 0m0.070s 00:04:17.046 user 0m0.030s 00:04:17.046 sys 0m0.046s 00:04:17.046 10:17:11 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.046 10:17:11 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:17.046 ************************************ 00:04:17.046 END TEST rpc_client 00:04:17.046 ************************************ 00:04:17.046 10:17:11 -- common/autotest_common.sh@1142 -- # return 0 00:04:17.046 10:17:11 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:17.046 10:17:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.046 10:17:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.046 10:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:17.046 ************************************ 00:04:17.046 START TEST json_config 00:04:17.046 ************************************ 00:04:17.046 10:17:11 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:17.046 
10:17:11 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:17.046 10:17:11 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:17.046 10:17:11 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:17.046 10:17:11 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:17.046 10:17:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.046 10:17:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.046 10:17:11 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.046 10:17:11 json_config -- paths/export.sh@5 -- # export PATH 00:04:17.046 10:17:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@47 -- # : 0 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:17.046 10:17:11 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:17.046 10:17:11 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:17.046 INFO: JSON configuration test init 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:17.046 10:17:11 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:17.046 10:17:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:17.046 10:17:11 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:17.046 10:17:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.046 10:17:11 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:17.046 10:17:11 json_config -- json_config/common.sh@9 -- # local app=target 00:04:17.046 10:17:11 json_config -- json_config/common.sh@10 -- # shift 00:04:17.046 10:17:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:17.046 10:17:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:17.046 10:17:11 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:17.046 10:17:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:17.046 10:17:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:17.046 10:17:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2186432 00:04:17.046 10:17:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:17.047 10:17:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:17.047 Waiting for target to run... 00:04:17.047 10:17:11 json_config -- json_config/common.sh@25 -- # waitforlisten 2186432 /var/tmp/spdk_tgt.sock 00:04:17.047 10:17:11 json_config -- common/autotest_common.sh@829 -- # '[' -z 2186432 ']' 00:04:17.047 10:17:11 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:17.047 10:17:11 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:17.047 10:17:11 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:17.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:17.047 10:17:11 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:17.047 10:17:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.047 [2024-07-15 10:17:11.575493] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:17.047 [2024-07-15 10:17:11.575581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186432 ] 00:04:17.047 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.305 [2024-07-15 10:17:11.917313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.561 [2024-07-15 10:17:12.006778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.126 10:17:12 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:18.126 10:17:12 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:18.126 10:17:12 json_config -- json_config/common.sh@26 -- # echo '' 00:04:18.126 00:04:18.126 10:17:12 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:18.126 10:17:12 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:18.126 10:17:12 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:18.126 10:17:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.126 10:17:12 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:18.126 10:17:12 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:18.126 10:17:12 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:18.126 10:17:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.126 10:17:12 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:18.126 10:17:12 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:18.126 10:17:12 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:21.420 10:17:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:21.420 10:17:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:21.420 10:17:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:21.420 10:17:15 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:21.420 10:17:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:21.420 10:17:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:21.420 10:17:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:21.420 10:17:15 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:21.420 10:17:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:21.678 MallocForNvmf0 00:04:21.678 10:17:16 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:21.678 10:17:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:21.935 MallocForNvmf1 00:04:21.935 10:17:16 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:21.935 10:17:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:22.192 [2024-07-15 10:17:16.679805] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:22.192 10:17:16 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:22.192 10:17:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:22.450 10:17:16 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:22.450 10:17:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:22.707 10:17:17 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:22.707 10:17:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:22.964 10:17:17 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:22.964 10:17:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:23.222 [2024-07-15 10:17:17.655021] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:23.222 10:17:17 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:23.222 10:17:17 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:23.222 10:17:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.222 10:17:17 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:23.222 10:17:17 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:23.222 10:17:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.222 10:17:17 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:23.222 10:17:17 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:23.222 10:17:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:23.480 MallocBdevForConfigChangeCheck 00:04:23.480 10:17:17 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:23.480 10:17:17 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:23.480 10:17:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.480 10:17:17 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:23.480 10:17:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:23.737 10:17:18 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:23.737 INFO: shutting down applications... 00:04:23.737 10:17:18 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:23.737 10:17:18 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:23.737 10:17:18 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:23.737 10:17:18 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:25.635 Calling clear_iscsi_subsystem 00:04:25.635 Calling clear_nvmf_subsystem 00:04:25.635 Calling clear_nbd_subsystem 00:04:25.635 Calling clear_ublk_subsystem 00:04:25.635 Calling clear_vhost_blk_subsystem 00:04:25.635 Calling clear_vhost_scsi_subsystem 00:04:25.635 Calling clear_bdev_subsystem 00:04:25.635 10:17:19 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:25.635 10:17:19 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:25.635 10:17:19 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:25.635 10:17:19 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:25.635 10:17:19 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:25.635 10:17:19 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:25.892 10:17:20 json_config -- json_config/json_config.sh@345 -- # break 00:04:25.892 10:17:20 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:25.893 10:17:20 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:25.893 10:17:20 json_config -- json_config/common.sh@31 -- # local app=target 00:04:25.893 10:17:20 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:25.893 10:17:20 json_config -- json_config/common.sh@35 -- # [[ -n 2186432 ]] 00:04:25.893 10:17:20 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2186432 00:04:25.893 10:17:20 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:25.893 10:17:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.893 10:17:20 json_config -- json_config/common.sh@41 -- # kill -0 2186432 00:04:25.893 10:17:20 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:26.460 10:17:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:26.460 10:17:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.460 10:17:20 json_config -- json_config/common.sh@41 -- # kill -0 2186432 00:04:26.460 10:17:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:26.460 10:17:20 json_config -- json_config/common.sh@43 -- # break 00:04:26.460 10:17:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:26.460 10:17:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:04:26.460 SPDK target shutdown done 00:04:26.460 10:17:20 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:26.460 INFO: relaunching applications... 00:04:26.460 10:17:20 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.460 10:17:20 json_config -- json_config/common.sh@9 -- # local app=target 00:04:26.460 10:17:20 json_config -- json_config/common.sh@10 -- # shift 00:04:26.460 10:17:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:26.460 10:17:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:26.460 10:17:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:26.460 10:17:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.460 10:17:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.460 10:17:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2187620 00:04:26.460 10:17:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.460 10:17:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:26.460 Waiting for target to run... 00:04:26.460 10:17:20 json_config -- json_config/common.sh@25 -- # waitforlisten 2187620 /var/tmp/spdk_tgt.sock 00:04:26.460 10:17:20 json_config -- common/autotest_common.sh@829 -- # '[' -z 2187620 ']' 00:04:26.460 10:17:20 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:26.460 10:17:20 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:26.460 10:17:20 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:26.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:26.460 10:17:20 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:26.460 10:17:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.460 [2024-07-15 10:17:20.937550] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:26.460 [2024-07-15 10:17:20.937652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187620 ] 00:04:26.460 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.025 [2024-07-15 10:17:21.435109] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.025 [2024-07-15 10:17:21.542461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.304 [2024-07-15 10:17:24.587001] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.304 [2024-07-15 10:17:24.619472] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:30.867 10:17:25 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:30.867 10:17:25 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:30.867 10:17:25 json_config -- json_config/common.sh@26 -- # echo '' 00:04:30.867 00:04:30.867 10:17:25 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:30.867 10:17:25 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:30.867 INFO: Checking if target configuration is the same... 00:04:30.867 10:17:25 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.867 10:17:25 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:30.867 10:17:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:30.867 + '[' 2 -ne 2 ']' 00:04:30.867 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:30.867 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:30.867 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:30.867 +++ basename /dev/fd/62 00:04:30.867 ++ mktemp /tmp/62.XXX 00:04:30.867 + tmp_file_1=/tmp/62.NvV 00:04:30.867 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.867 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:30.867 + tmp_file_2=/tmp/spdk_tgt_config.json.Y38 00:04:30.867 + ret=0 00:04:30.867 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:31.124 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:31.380 + diff -u /tmp/62.NvV /tmp/spdk_tgt_config.json.Y38 00:04:31.380 + echo 'INFO: JSON config files are the same' 00:04:31.380 INFO: JSON config files are the same 00:04:31.380 + rm /tmp/62.NvV /tmp/spdk_tgt_config.json.Y38 00:04:31.380 + exit 0 00:04:31.380 10:17:25 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:31.380 10:17:25 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:31.380 INFO: changing configuration and checking if this can be detected... 
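The comparison driving both of these checks dumps the live configuration over the RPC socket, normalizes both JSON documents with config_filter.py, and diffs them; an empty diff means the running target still matches spdk_tgt_config.json. A rough sketch of that flow, with config_filter.py's sort step stood in for by a generic json.dumps(sort_keys=True) normalizer (an assumption, not the test's own filter):

# Normalize key order so semantically equal configs diff clean.
sort_json() {
    python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))'
}
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | sort_json > /tmp/live.json
sort_json < spdk_tgt_config.json > /tmp/ref.json
if diff -u /tmp/ref.json /tmp/live.json; then
    echo 'INFO: JSON config files are the same'
fi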
00:04:31.380 10:17:25 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:31.380 10:17:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:31.637 10:17:26 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.637 10:17:26 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:31.637 10:17:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.637 + '[' 2 -ne 2 ']' 00:04:31.637 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:31.637 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:31.637 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:31.637 +++ basename /dev/fd/62 00:04:31.637 ++ mktemp /tmp/62.XXX 00:04:31.637 + tmp_file_1=/tmp/62.QTA 00:04:31.637 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.637 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:31.637 + tmp_file_2=/tmp/spdk_tgt_config.json.AWu 00:04:31.637 + ret=0 00:04:31.637 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:31.893 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:31.893 + diff -u /tmp/62.QTA /tmp/spdk_tgt_config.json.AWu 00:04:31.893 + ret=1 00:04:31.893 + echo '=== Start of file: /tmp/62.QTA ===' 00:04:31.893 + cat /tmp/62.QTA 00:04:31.893 + echo '=== End of file: /tmp/62.QTA ===' 00:04:31.893 + echo '' 00:04:31.893 + echo '=== Start of file: /tmp/spdk_tgt_config.json.AWu ===' 00:04:31.893 + cat /tmp/spdk_tgt_config.json.AWu 00:04:31.893 + echo '=== End of file: /tmp/spdk_tgt_config.json.AWu ===' 00:04:31.893 + echo '' 00:04:31.893 + rm /tmp/62.QTA /tmp/spdk_tgt_config.json.AWu 00:04:31.893 + exit 1 00:04:31.893 10:17:26 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:31.893 INFO: configuration change detected. 
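The change half of the test then deletes the sentinel MallocBdevForConfigChangeCheck bdev and repeats the same diff, which now exits 1, dumps both file bodies, and prints the detection message seen above. Roughly, reusing the hypothetical sort_json normalizer from the previous sketch:

./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | sort_json > /tmp/live.json
if ! diff -u /tmp/ref.json /tmp/live.json; then
    echo 'INFO: configuration change detected.'
fi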
00:04:31.893 10:17:26 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:31.894 10:17:26 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:31.894 10:17:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:31.894 10:17:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.894 10:17:26 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:31.894 10:17:26 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:31.894 10:17:26 json_config -- json_config/json_config.sh@317 -- # [[ -n 2187620 ]] 00:04:31.894 10:17:26 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:31.894 10:17:26 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:31.894 10:17:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:31.894 10:17:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.894 10:17:26 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:31.894 10:17:26 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:31.894 10:17:26 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:31.894 10:17:26 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:31.894 10:17:26 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:31.894 10:17:26 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:31.894 10:17:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:31.894 10:17:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.894 10:17:26 json_config -- json_config/json_config.sh@323 -- # killprocess 2187620 00:04:31.894 10:17:26 json_config -- common/autotest_common.sh@948 -- # '[' -z 2187620 ']' 00:04:31.894 10:17:26 json_config -- common/autotest_common.sh@952 -- # kill -0 2187620 00:04:31.894 10:17:26 json_config -- common/autotest_common.sh@953 -- # uname 00:04:31.894 10:17:26 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:31.894 10:17:26 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2187620 00:04:32.150 10:17:26 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:32.150 10:17:26 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:32.150 10:17:26 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2187620' 00:04:32.150 killing process with pid 2187620 00:04:32.150 10:17:26 json_config -- common/autotest_common.sh@967 -- # kill 2187620 00:04:32.150 10:17:26 json_config -- common/autotest_common.sh@972 -- # wait 2187620 00:04:33.518 10:17:28 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.518 10:17:28 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:33.518 10:17:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:33.518 10:17:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.776 10:17:28 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:33.776 10:17:28 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:33.776 INFO: Success 00:04:33.776 00:04:33.776 real 0m16.718s 
00:04:33.776 user 0m18.724s 00:04:33.776 sys 0m2.015s 00:04:33.776 10:17:28 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.776 10:17:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.776 ************************************ 00:04:33.776 END TEST json_config 00:04:33.776 ************************************ 00:04:33.776 10:17:28 -- common/autotest_common.sh@1142 -- # return 0 00:04:33.776 10:17:28 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:33.776 10:17:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.776 10:17:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.776 10:17:28 -- common/autotest_common.sh@10 -- # set +x 00:04:33.776 ************************************ 00:04:33.776 START TEST json_config_extra_key 00:04:33.776 ************************************ 00:04:33.776 10:17:28 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:33.776 10:17:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:33.776 10:17:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:33.776 10:17:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:33.776 10:17:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:33.776 10:17:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:33.776 10:17:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:33.776 10:17:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:33.776 10:17:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:33.776 10:17:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:33.776 10:17:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:33.776 10:17:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:33.776 10:17:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:33.777 10:17:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:33.777 10:17:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:33.777 10:17:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:33.777 10:17:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:33.777 10:17:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:33.777 10:17:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:33.777 10:17:28 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:33.777 10:17:28 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:33.777 10:17:28 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:33.777 10:17:28 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:33.777 10:17:28 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.777 10:17:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.777 10:17:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.777 10:17:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:33.777 10:17:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.777 10:17:28 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:33.777 10:17:28 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:33.777 10:17:28 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:33.777 10:17:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:33.777 10:17:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:33.777 10:17:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:33.777 10:17:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:33.777 10:17:28 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:33.777 10:17:28 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:33.777 10:17:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:33.777 10:17:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:33.777 10:17:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:33.777 10:17:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:33.777 10:17:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:33.777 10:17:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:33.777 10:17:28 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:33.777 10:17:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:33.777 10:17:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:33.777 10:17:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:33.777 10:17:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:33.777 INFO: launching applications... 00:04:33.777 10:17:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:33.777 10:17:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:33.777 10:17:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:33.777 10:17:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.777 10:17:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.777 10:17:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.777 10:17:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.777 10:17:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.777 10:17:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2188668 00:04:33.777 10:17:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:33.777 10:17:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.777 Waiting for target to run... 00:04:33.777 10:17:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2188668 /var/tmp/spdk_tgt.sock 00:04:33.777 10:17:28 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2188668 ']' 00:04:33.777 10:17:28 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.777 10:17:28 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.777 10:17:28 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.777 10:17:28 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.777 10:17:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:33.777 [2024-07-15 10:17:28.338067] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:33.777 [2024-07-15 10:17:28.338154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2188668 ] 00:04:33.777 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.035 [2024-07-15 10:17:28.671523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.293 [2024-07-15 10:17:28.760537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.888 10:17:29 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.888 10:17:29 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:34.888 10:17:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:34.888 00:04:34.888 10:17:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:34.888 INFO: shutting down applications... 00:04:34.888 10:17:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:34.888 10:17:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:34.888 10:17:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:34.888 10:17:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2188668 ]] 00:04:34.888 10:17:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2188668 00:04:34.888 10:17:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:34.888 10:17:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.888 10:17:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2188668 00:04:34.888 10:17:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:35.189 10:17:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:35.189 10:17:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.189 10:17:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2188668 00:04:35.189 10:17:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:35.755 10:17:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:35.755 10:17:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.755 10:17:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2188668 00:04:35.755 10:17:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:35.755 10:17:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:35.755 10:17:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:35.755 10:17:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:35.755 SPDK target shutdown done 00:04:35.755 10:17:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:35.755 Success 00:04:35.755 00:04:35.755 real 0m2.034s 00:04:35.755 user 0m1.571s 00:04:35.755 sys 0m0.412s 00:04:35.755 10:17:30 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.755 10:17:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:35.755 ************************************ 00:04:35.755 END TEST json_config_extra_key 00:04:35.755 ************************************ 00:04:35.755 10:17:30 -- 
common/autotest_common.sh@1142 -- # return 0 00:04:35.755 10:17:30 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:35.755 10:17:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.755 10:17:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.755 10:17:30 -- common/autotest_common.sh@10 -- # set +x 00:04:35.755 ************************************ 00:04:35.755 START TEST alias_rpc 00:04:35.755 ************************************ 00:04:35.755 10:17:30 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:35.755 * Looking for test storage... 00:04:35.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:35.755 10:17:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:35.755 10:17:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2188979 00:04:35.755 10:17:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.755 10:17:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2188979 00:04:35.755 10:17:30 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2188979 ']' 00:04:35.755 10:17:30 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.755 10:17:30 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.755 10:17:30 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.755 10:17:30 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.755 10:17:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.013 [2024-07-15 10:17:30.421262] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:36.013 [2024-07-15 10:17:30.421338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2188979 ] 00:04:36.013 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.013 [2024-07-15 10:17:30.477469] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.013 [2024-07-15 10:17:30.583155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.271 10:17:30 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:36.271 10:17:30 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:36.271 10:17:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:36.528 10:17:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2188979 00:04:36.528 10:17:31 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2188979 ']' 00:04:36.528 10:17:31 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2188979 00:04:36.528 10:17:31 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:36.528 10:17:31 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:36.528 10:17:31 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2188979 00:04:36.528 10:17:31 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:36.528 10:17:31 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:36.528 10:17:31 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2188979' 00:04:36.528 killing process with pid 2188979 00:04:36.528 10:17:31 alias_rpc -- common/autotest_common.sh@967 -- # kill 2188979 00:04:36.528 10:17:31 alias_rpc -- common/autotest_common.sh@972 -- # wait 2188979 00:04:37.093 00:04:37.093 real 0m1.262s 00:04:37.093 user 0m1.345s 00:04:37.093 sys 0m0.413s 00:04:37.093 10:17:31 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.093 10:17:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.093 ************************************ 00:04:37.093 END TEST alias_rpc 00:04:37.093 ************************************ 00:04:37.093 10:17:31 -- common/autotest_common.sh@1142 -- # return 0 00:04:37.093 10:17:31 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:37.093 10:17:31 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:37.093 10:17:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.093 10:17:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.093 10:17:31 -- common/autotest_common.sh@10 -- # set +x 00:04:37.093 ************************************ 00:04:37.093 START TEST spdkcli_tcp 00:04:37.093 ************************************ 00:04:37.093 10:17:31 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:37.093 * Looking for test storage... 
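
Each suite in this run follows the same start-up handshake: launch spdk_tgt in the background, then block in waitforlisten until the RPC socket answers, which is why every test prints the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line before doing any work. A minimal sketch of that polling pattern, assuming the default socket path (the real helper in common/autotest_common.sh adds more retry and error handling than shown):

    ./build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # Poll the RPC socket; rpc.py exits non-zero until the target is listening.
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
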
00:04:37.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:37.093 10:17:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:37.093 10:17:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:37.093 10:17:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:37.093 10:17:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:37.093 10:17:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:37.093 10:17:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:37.093 10:17:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:37.093 10:17:31 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:37.093 10:17:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.093 10:17:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2189170 00:04:37.093 10:17:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:37.093 10:17:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2189170 00:04:37.093 10:17:31 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2189170 ']' 00:04:37.093 10:17:31 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.093 10:17:31 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:37.093 10:17:31 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.093 10:17:31 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:37.093 10:17:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.093 [2024-07-15 10:17:31.728121] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:37.093 [2024-07-15 10:17:31.728232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189170 ] 00:04:37.351 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.351 [2024-07-15 10:17:31.786336] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.351 [2024-07-15 10:17:31.891815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.351 [2024-07-15 10:17:31.891819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.608 10:17:32 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.608 10:17:32 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:37.608 10:17:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2189175 00:04:37.608 10:17:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:37.608 10:17:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:37.865 [ 00:04:37.865 "bdev_malloc_delete", 00:04:37.866 "bdev_malloc_create", 00:04:37.866 "bdev_null_resize", 00:04:37.866 "bdev_null_delete", 00:04:37.866 "bdev_null_create", 00:04:37.866 "bdev_nvme_cuse_unregister", 00:04:37.866 "bdev_nvme_cuse_register", 00:04:37.866 "bdev_opal_new_user", 00:04:37.866 "bdev_opal_set_lock_state", 00:04:37.866 "bdev_opal_delete", 00:04:37.866 "bdev_opal_get_info", 00:04:37.866 "bdev_opal_create", 00:04:37.866 "bdev_nvme_opal_revert", 00:04:37.866 "bdev_nvme_opal_init", 00:04:37.866 "bdev_nvme_send_cmd", 00:04:37.866 "bdev_nvme_get_path_iostat", 00:04:37.866 "bdev_nvme_get_mdns_discovery_info", 00:04:37.866 "bdev_nvme_stop_mdns_discovery", 00:04:37.866 "bdev_nvme_start_mdns_discovery", 00:04:37.866 "bdev_nvme_set_multipath_policy", 00:04:37.866 "bdev_nvme_set_preferred_path", 00:04:37.866 "bdev_nvme_get_io_paths", 00:04:37.866 "bdev_nvme_remove_error_injection", 00:04:37.866 "bdev_nvme_add_error_injection", 00:04:37.866 "bdev_nvme_get_discovery_info", 00:04:37.866 "bdev_nvme_stop_discovery", 00:04:37.866 "bdev_nvme_start_discovery", 00:04:37.866 "bdev_nvme_get_controller_health_info", 00:04:37.866 "bdev_nvme_disable_controller", 00:04:37.866 "bdev_nvme_enable_controller", 00:04:37.866 "bdev_nvme_reset_controller", 00:04:37.866 "bdev_nvme_get_transport_statistics", 00:04:37.866 "bdev_nvme_apply_firmware", 00:04:37.866 "bdev_nvme_detach_controller", 00:04:37.866 "bdev_nvme_get_controllers", 00:04:37.866 "bdev_nvme_attach_controller", 00:04:37.866 "bdev_nvme_set_hotplug", 00:04:37.866 "bdev_nvme_set_options", 00:04:37.866 "bdev_passthru_delete", 00:04:37.866 "bdev_passthru_create", 00:04:37.866 "bdev_lvol_set_parent_bdev", 00:04:37.866 "bdev_lvol_set_parent", 00:04:37.866 "bdev_lvol_check_shallow_copy", 00:04:37.866 "bdev_lvol_start_shallow_copy", 00:04:37.866 "bdev_lvol_grow_lvstore", 00:04:37.866 "bdev_lvol_get_lvols", 00:04:37.866 "bdev_lvol_get_lvstores", 00:04:37.866 "bdev_lvol_delete", 00:04:37.866 "bdev_lvol_set_read_only", 00:04:37.866 "bdev_lvol_resize", 00:04:37.866 "bdev_lvol_decouple_parent", 00:04:37.866 "bdev_lvol_inflate", 00:04:37.866 "bdev_lvol_rename", 00:04:37.866 "bdev_lvol_clone_bdev", 00:04:37.866 "bdev_lvol_clone", 00:04:37.866 "bdev_lvol_snapshot", 00:04:37.866 "bdev_lvol_create", 00:04:37.866 "bdev_lvol_delete_lvstore", 00:04:37.866 
"bdev_lvol_rename_lvstore", 00:04:37.866 "bdev_lvol_create_lvstore", 00:04:37.866 "bdev_raid_set_options", 00:04:37.866 "bdev_raid_remove_base_bdev", 00:04:37.866 "bdev_raid_add_base_bdev", 00:04:37.866 "bdev_raid_delete", 00:04:37.866 "bdev_raid_create", 00:04:37.866 "bdev_raid_get_bdevs", 00:04:37.866 "bdev_error_inject_error", 00:04:37.866 "bdev_error_delete", 00:04:37.866 "bdev_error_create", 00:04:37.866 "bdev_split_delete", 00:04:37.866 "bdev_split_create", 00:04:37.866 "bdev_delay_delete", 00:04:37.866 "bdev_delay_create", 00:04:37.866 "bdev_delay_update_latency", 00:04:37.866 "bdev_zone_block_delete", 00:04:37.866 "bdev_zone_block_create", 00:04:37.866 "blobfs_create", 00:04:37.866 "blobfs_detect", 00:04:37.866 "blobfs_set_cache_size", 00:04:37.866 "bdev_aio_delete", 00:04:37.866 "bdev_aio_rescan", 00:04:37.866 "bdev_aio_create", 00:04:37.866 "bdev_ftl_set_property", 00:04:37.866 "bdev_ftl_get_properties", 00:04:37.866 "bdev_ftl_get_stats", 00:04:37.866 "bdev_ftl_unmap", 00:04:37.866 "bdev_ftl_unload", 00:04:37.866 "bdev_ftl_delete", 00:04:37.866 "bdev_ftl_load", 00:04:37.866 "bdev_ftl_create", 00:04:37.866 "bdev_virtio_attach_controller", 00:04:37.866 "bdev_virtio_scsi_get_devices", 00:04:37.866 "bdev_virtio_detach_controller", 00:04:37.866 "bdev_virtio_blk_set_hotplug", 00:04:37.866 "bdev_iscsi_delete", 00:04:37.866 "bdev_iscsi_create", 00:04:37.866 "bdev_iscsi_set_options", 00:04:37.866 "accel_error_inject_error", 00:04:37.866 "ioat_scan_accel_module", 00:04:37.866 "dsa_scan_accel_module", 00:04:37.866 "iaa_scan_accel_module", 00:04:37.866 "vfu_virtio_create_scsi_endpoint", 00:04:37.866 "vfu_virtio_scsi_remove_target", 00:04:37.866 "vfu_virtio_scsi_add_target", 00:04:37.866 "vfu_virtio_create_blk_endpoint", 00:04:37.866 "vfu_virtio_delete_endpoint", 00:04:37.866 "keyring_file_remove_key", 00:04:37.866 "keyring_file_add_key", 00:04:37.866 "keyring_linux_set_options", 00:04:37.866 "iscsi_get_histogram", 00:04:37.866 "iscsi_enable_histogram", 00:04:37.866 "iscsi_set_options", 00:04:37.866 "iscsi_get_auth_groups", 00:04:37.866 "iscsi_auth_group_remove_secret", 00:04:37.866 "iscsi_auth_group_add_secret", 00:04:37.866 "iscsi_delete_auth_group", 00:04:37.866 "iscsi_create_auth_group", 00:04:37.866 "iscsi_set_discovery_auth", 00:04:37.866 "iscsi_get_options", 00:04:37.866 "iscsi_target_node_request_logout", 00:04:37.866 "iscsi_target_node_set_redirect", 00:04:37.866 "iscsi_target_node_set_auth", 00:04:37.866 "iscsi_target_node_add_lun", 00:04:37.866 "iscsi_get_stats", 00:04:37.866 "iscsi_get_connections", 00:04:37.866 "iscsi_portal_group_set_auth", 00:04:37.866 "iscsi_start_portal_group", 00:04:37.866 "iscsi_delete_portal_group", 00:04:37.866 "iscsi_create_portal_group", 00:04:37.866 "iscsi_get_portal_groups", 00:04:37.866 "iscsi_delete_target_node", 00:04:37.866 "iscsi_target_node_remove_pg_ig_maps", 00:04:37.866 "iscsi_target_node_add_pg_ig_maps", 00:04:37.866 "iscsi_create_target_node", 00:04:37.866 "iscsi_get_target_nodes", 00:04:37.866 "iscsi_delete_initiator_group", 00:04:37.866 "iscsi_initiator_group_remove_initiators", 00:04:37.866 "iscsi_initiator_group_add_initiators", 00:04:37.866 "iscsi_create_initiator_group", 00:04:37.866 "iscsi_get_initiator_groups", 00:04:37.866 "nvmf_set_crdt", 00:04:37.866 "nvmf_set_config", 00:04:37.866 "nvmf_set_max_subsystems", 00:04:37.866 "nvmf_stop_mdns_prr", 00:04:37.866 "nvmf_publish_mdns_prr", 00:04:37.866 "nvmf_subsystem_get_listeners", 00:04:37.866 "nvmf_subsystem_get_qpairs", 00:04:37.866 "nvmf_subsystem_get_controllers", 00:04:37.866 
"nvmf_get_stats", 00:04:37.866 "nvmf_get_transports", 00:04:37.866 "nvmf_create_transport", 00:04:37.866 "nvmf_get_targets", 00:04:37.866 "nvmf_delete_target", 00:04:37.866 "nvmf_create_target", 00:04:37.866 "nvmf_subsystem_allow_any_host", 00:04:37.866 "nvmf_subsystem_remove_host", 00:04:37.866 "nvmf_subsystem_add_host", 00:04:37.866 "nvmf_ns_remove_host", 00:04:37.866 "nvmf_ns_add_host", 00:04:37.866 "nvmf_subsystem_remove_ns", 00:04:37.866 "nvmf_subsystem_add_ns", 00:04:37.866 "nvmf_subsystem_listener_set_ana_state", 00:04:37.866 "nvmf_discovery_get_referrals", 00:04:37.866 "nvmf_discovery_remove_referral", 00:04:37.866 "nvmf_discovery_add_referral", 00:04:37.866 "nvmf_subsystem_remove_listener", 00:04:37.866 "nvmf_subsystem_add_listener", 00:04:37.866 "nvmf_delete_subsystem", 00:04:37.866 "nvmf_create_subsystem", 00:04:37.866 "nvmf_get_subsystems", 00:04:37.866 "env_dpdk_get_mem_stats", 00:04:37.866 "nbd_get_disks", 00:04:37.866 "nbd_stop_disk", 00:04:37.866 "nbd_start_disk", 00:04:37.866 "ublk_recover_disk", 00:04:37.866 "ublk_get_disks", 00:04:37.866 "ublk_stop_disk", 00:04:37.866 "ublk_start_disk", 00:04:37.866 "ublk_destroy_target", 00:04:37.866 "ublk_create_target", 00:04:37.866 "virtio_blk_create_transport", 00:04:37.866 "virtio_blk_get_transports", 00:04:37.866 "vhost_controller_set_coalescing", 00:04:37.866 "vhost_get_controllers", 00:04:37.866 "vhost_delete_controller", 00:04:37.866 "vhost_create_blk_controller", 00:04:37.866 "vhost_scsi_controller_remove_target", 00:04:37.866 "vhost_scsi_controller_add_target", 00:04:37.866 "vhost_start_scsi_controller", 00:04:37.866 "vhost_create_scsi_controller", 00:04:37.866 "thread_set_cpumask", 00:04:37.866 "framework_get_governor", 00:04:37.866 "framework_get_scheduler", 00:04:37.866 "framework_set_scheduler", 00:04:37.866 "framework_get_reactors", 00:04:37.866 "thread_get_io_channels", 00:04:37.866 "thread_get_pollers", 00:04:37.866 "thread_get_stats", 00:04:37.866 "framework_monitor_context_switch", 00:04:37.866 "spdk_kill_instance", 00:04:37.866 "log_enable_timestamps", 00:04:37.866 "log_get_flags", 00:04:37.866 "log_clear_flag", 00:04:37.866 "log_set_flag", 00:04:37.866 "log_get_level", 00:04:37.866 "log_set_level", 00:04:37.866 "log_get_print_level", 00:04:37.866 "log_set_print_level", 00:04:37.866 "framework_enable_cpumask_locks", 00:04:37.866 "framework_disable_cpumask_locks", 00:04:37.866 "framework_wait_init", 00:04:37.866 "framework_start_init", 00:04:37.866 "scsi_get_devices", 00:04:37.866 "bdev_get_histogram", 00:04:37.866 "bdev_enable_histogram", 00:04:37.866 "bdev_set_qos_limit", 00:04:37.866 "bdev_set_qd_sampling_period", 00:04:37.866 "bdev_get_bdevs", 00:04:37.866 "bdev_reset_iostat", 00:04:37.866 "bdev_get_iostat", 00:04:37.866 "bdev_examine", 00:04:37.866 "bdev_wait_for_examine", 00:04:37.866 "bdev_set_options", 00:04:37.866 "notify_get_notifications", 00:04:37.866 "notify_get_types", 00:04:37.866 "accel_get_stats", 00:04:37.866 "accel_set_options", 00:04:37.866 "accel_set_driver", 00:04:37.866 "accel_crypto_key_destroy", 00:04:37.866 "accel_crypto_keys_get", 00:04:37.866 "accel_crypto_key_create", 00:04:37.866 "accel_assign_opc", 00:04:37.866 "accel_get_module_info", 00:04:37.866 "accel_get_opc_assignments", 00:04:37.866 "vmd_rescan", 00:04:37.866 "vmd_remove_device", 00:04:37.866 "vmd_enable", 00:04:37.866 "sock_get_default_impl", 00:04:37.866 "sock_set_default_impl", 00:04:37.866 "sock_impl_set_options", 00:04:37.866 "sock_impl_get_options", 00:04:37.866 "iobuf_get_stats", 00:04:37.866 "iobuf_set_options", 
00:04:37.866 "keyring_get_keys", 00:04:37.866 "framework_get_pci_devices", 00:04:37.866 "framework_get_config", 00:04:37.866 "framework_get_subsystems", 00:04:37.866 "vfu_tgt_set_base_path", 00:04:37.866 "trace_get_info", 00:04:37.866 "trace_get_tpoint_group_mask", 00:04:37.866 "trace_disable_tpoint_group", 00:04:37.866 "trace_enable_tpoint_group", 00:04:37.866 "trace_clear_tpoint_mask", 00:04:37.866 "trace_set_tpoint_mask", 00:04:37.866 "spdk_get_version", 00:04:37.867 "rpc_get_methods" 00:04:37.867 ] 00:04:37.867 10:17:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:37.867 10:17:32 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:37.867 10:17:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.867 10:17:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:37.867 10:17:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2189170 00:04:37.867 10:17:32 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2189170 ']' 00:04:37.867 10:17:32 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2189170 00:04:37.867 10:17:32 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:37.867 10:17:32 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:37.867 10:17:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2189170 00:04:37.867 10:17:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:37.867 10:17:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:37.867 10:17:32 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2189170' 00:04:37.867 killing process with pid 2189170 00:04:37.867 10:17:32 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2189170 00:04:37.867 10:17:32 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2189170 00:04:38.431 00:04:38.431 real 0m1.303s 00:04:38.431 user 0m2.280s 00:04:38.431 sys 0m0.433s 00:04:38.431 10:17:32 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.432 10:17:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.432 ************************************ 00:04:38.432 END TEST spdkcli_tcp 00:04:38.432 ************************************ 00:04:38.432 10:17:32 -- common/autotest_common.sh@1142 -- # return 0 00:04:38.432 10:17:32 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:38.432 10:17:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.432 10:17:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.432 10:17:32 -- common/autotest_common.sh@10 -- # set +x 00:04:38.432 ************************************ 00:04:38.432 START TEST dpdk_mem_utility 00:04:38.432 ************************************ 00:04:38.432 10:17:32 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:38.432 * Looking for test storage... 
00:04:38.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:38.432 10:17:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:38.432 10:17:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2189371 00:04:38.432 10:17:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.432 10:17:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2189371 00:04:38.432 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2189371 ']' 00:04:38.432 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.432 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.432 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.432 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.432 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:38.701 [2024-07-15 10:17:33.084818] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:38.701 [2024-07-15 10:17:33.084918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189371 ] 00:04:38.701 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.701 [2024-07-15 10:17:33.140943] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.701 [2024-07-15 10:17:33.246229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.961 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.961 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:38.961 10:17:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:38.961 10:17:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:38.961 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.961 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:38.961 { 00:04:38.961 "filename": "/tmp/spdk_mem_dump.txt" 00:04:38.961 } 00:04:38.961 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.961 10:17:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:38.961 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:38.961 1 heaps totaling size 814.000000 MiB 00:04:38.961 size: 814.000000 MiB heap id: 0 00:04:38.961 end heaps---------- 00:04:38.961 8 mempools totaling size 598.116089 MiB 00:04:38.961 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:38.961 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:38.961 size: 84.521057 MiB name: bdev_io_2189371 00:04:38.961 size: 51.011292 MiB name: evtpool_2189371 00:04:38.961 
size: 50.003479 MiB name: msgpool_2189371 00:04:38.961 size: 21.763794 MiB name: PDU_Pool 00:04:38.961 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:38.961 size: 0.026123 MiB name: Session_Pool 00:04:38.961 end mempools------- 00:04:38.961 6 memzones totaling size 4.142822 MiB 00:04:38.961 size: 1.000366 MiB name: RG_ring_0_2189371 00:04:38.961 size: 1.000366 MiB name: RG_ring_1_2189371 00:04:38.961 size: 1.000366 MiB name: RG_ring_4_2189371 00:04:38.961 size: 1.000366 MiB name: RG_ring_5_2189371 00:04:38.961 size: 0.125366 MiB name: RG_ring_2_2189371 00:04:38.961 size: 0.015991 MiB name: RG_ring_3_2189371 00:04:38.961 end memzones------- 00:04:38.961 10:17:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:39.219 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:39.219 list of free elements. size: 12.519348 MiB 00:04:39.219 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:39.219 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:39.219 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:39.219 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:39.219 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:39.219 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:39.219 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:39.219 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:39.219 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:39.219 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:39.219 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:39.219 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:39.219 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:39.219 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:39.219 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:39.219 list of standard malloc elements. 
size: 199.218079 MiB 00:04:39.219 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:39.219 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:39.219 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:39.219 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:39.219 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:39.219 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:39.219 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:39.219 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:39.219 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:39.219 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:39.219 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:39.219 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:39.219 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:39.219 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:39.219 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:39.219 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:39.219 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:39.219 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:39.219 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:39.219 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:39.219 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:39.219 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:39.219 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:39.219 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:39.219 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:39.219 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:39.219 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:39.219 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:39.219 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:39.219 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:39.219 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:39.219 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:39.219 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:39.219 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:39.219 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:39.219 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:39.220 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:39.220 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:39.220 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:39.220 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:39.220 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:39.220 list of memzone associated elements. 
size: 602.262573 MiB 00:04:39.220 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:39.220 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:39.220 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:39.220 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:39.220 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:39.220 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2189371_0 00:04:39.220 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:39.220 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2189371_0 00:04:39.220 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:39.220 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2189371_0 00:04:39.220 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:39.220 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:39.220 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:39.220 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:39.220 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:39.220 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2189371 00:04:39.220 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:39.220 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2189371 00:04:39.220 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:39.220 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2189371 00:04:39.220 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:39.220 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:39.220 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:39.220 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:39.220 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:39.220 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:39.220 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:39.220 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:39.220 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:39.220 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2189371 00:04:39.220 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:39.220 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2189371 00:04:39.220 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:39.220 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2189371 00:04:39.220 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:39.220 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2189371 00:04:39.220 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:39.220 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2189371 00:04:39.220 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:39.220 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:39.220 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:39.220 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:39.220 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:39.220 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:39.220 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:39.220 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2189371 00:04:39.220 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:39.220 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:39.220 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:39.220 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:39.220 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:39.220 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2189371 00:04:39.220 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:39.220 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:39.220 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:39.220 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2189371 00:04:39.220 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:39.220 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2189371 00:04:39.220 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:39.220 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:39.220 10:17:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:39.220 10:17:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2189371 00:04:39.220 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2189371 ']' 00:04:39.220 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2189371 00:04:39.220 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:39.220 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.220 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2189371 00:04:39.220 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:39.220 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:39.220 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2189371' 00:04:39.220 killing process with pid 2189371 00:04:39.220 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2189371 00:04:39.220 10:17:33 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2189371 00:04:39.477 00:04:39.477 real 0m1.126s 00:04:39.477 user 0m1.087s 00:04:39.477 sys 0m0.407s 00:04:39.477 10:17:34 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.477 10:17:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:39.477 ************************************ 00:04:39.477 END TEST dpdk_mem_utility 00:04:39.477 ************************************ 00:04:39.736 10:17:34 -- common/autotest_common.sh@1142 -- # return 0 00:04:39.736 10:17:34 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:39.736 10:17:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.736 10:17:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.736 10:17:34 -- common/autotest_common.sh@10 -- # set +x 00:04:39.736 ************************************ 00:04:39.736 START TEST event 00:04:39.736 ************************************ 00:04:39.736 10:17:34 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:39.736 * Looking for test storage... 
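
The heap, mempool, and memzone tables above come out of a two-step flow: the env_dpdk_get_mem_stats RPC makes the running target write its DPDK memory state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py renders that dump. The same three invocations from the log, runnable against any live target:

    # Ask the target to dump its DPDK memory state to /tmp/spdk_mem_dump.txt.
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # Summarize heaps, mempools, and memzones from the dump.
    ./scripts/dpdk_mem_info.py
    # Element-level breakdown of heap id 0.
    ./scripts/dpdk_mem_info.py -m 0
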
00:04:39.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:39.736 10:17:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:39.736 10:17:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:39.736 10:17:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:39.736 10:17:34 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:39.736 10:17:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.736 10:17:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.736 ************************************ 00:04:39.736 START TEST event_perf 00:04:39.736 ************************************ 00:04:39.736 10:17:34 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:39.736 Running I/O for 1 seconds...[2024-07-15 10:17:34.243008] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:39.736 [2024-07-15 10:17:34.243067] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189561 ] 00:04:39.736 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.736 [2024-07-15 10:17:34.303796] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:39.993 [2024-07-15 10:17:34.424896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.993 [2024-07-15 10:17:34.424929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:39.993 [2024-07-15 10:17:34.425048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:39.993 [2024-07-15 10:17:34.425051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.924 Running I/O for 1 seconds... 00:04:40.924 lcore 0: 232172 00:04:40.924 lcore 1: 232170 00:04:40.924 lcore 2: 232170 00:04:40.924 lcore 3: 232171 00:04:40.924 done. 00:04:40.924 00:04:40.924 real 0m1.316s 00:04:40.924 user 0m4.224s 00:04:40.924 sys 0m0.087s 00:04:40.924 10:17:35 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.924 10:17:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:40.924 ************************************ 00:04:40.924 END TEST event_perf 00:04:40.924 ************************************ 00:04:40.924 10:17:35 event -- common/autotest_common.sh@1142 -- # return 0 00:04:40.924 10:17:35 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:40.924 10:17:35 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:40.924 10:17:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.924 10:17:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.181 ************************************ 00:04:41.181 START TEST event_reactor 00:04:41.181 ************************************ 00:04:41.181 10:17:35 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:41.181 [2024-07-15 10:17:35.605329] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:41.181 [2024-07-15 10:17:35.605394] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189722 ] 00:04:41.181 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.181 [2024-07-15 10:17:35.665686] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.181 [2024-07-15 10:17:35.783007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.553 test_start 00:04:42.553 oneshot 00:04:42.553 tick 100 00:04:42.553 tick 100 00:04:42.553 tick 250 00:04:42.553 tick 100 00:04:42.553 tick 100 00:04:42.553 tick 100 00:04:42.553 tick 250 00:04:42.553 tick 500 00:04:42.553 tick 100 00:04:42.553 tick 100 00:04:42.553 tick 250 00:04:42.553 tick 100 00:04:42.553 tick 100 00:04:42.553 test_end 00:04:42.553 00:04:42.553 real 0m1.305s 00:04:42.553 user 0m1.219s 00:04:42.553 sys 0m0.082s 00:04:42.553 10:17:36 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.553 10:17:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:42.553 ************************************ 00:04:42.553 END TEST event_reactor 00:04:42.553 ************************************ 00:04:42.553 10:17:36 event -- common/autotest_common.sh@1142 -- # return 0 00:04:42.553 10:17:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:42.554 10:17:36 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:42.554 10:17:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.554 10:17:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.554 ************************************ 00:04:42.554 START TEST event_reactor_perf 00:04:42.554 ************************************ 00:04:42.554 10:17:36 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:42.554 [2024-07-15 10:17:36.962614] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:42.554 [2024-07-15 10:17:36.962680] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189995 ] 00:04:42.554 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.554 [2024-07-15 10:17:37.025422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.554 [2024-07-15 10:17:37.146301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.926 test_start 00:04:43.926 test_end 00:04:43.926 Performance: 356419 events per second 00:04:43.926 00:04:43.926 real 0m1.319s 00:04:43.926 user 0m1.232s 00:04:43.926 sys 0m0.081s 00:04:43.926 10:17:38 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.926 10:17:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:43.926 ************************************ 00:04:43.926 END TEST event_reactor_perf 00:04:43.926 ************************************ 00:04:43.926 10:17:38 event -- common/autotest_common.sh@1142 -- # return 0 00:04:43.926 10:17:38 event -- event/event.sh@49 -- # uname -s 00:04:43.926 10:17:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:43.926 10:17:38 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:43.926 10:17:38 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.926 10:17:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.926 10:17:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.926 ************************************ 00:04:43.926 START TEST event_scheduler 00:04:43.926 ************************************ 00:04:43.926 10:17:38 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:43.926 * Looking for test storage... 00:04:43.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:43.926 10:17:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:43.926 10:17:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2190174 00:04:43.926 10:17:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:43.926 10:17:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.926 10:17:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2190174 00:04:43.926 10:17:38 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2190174 ']' 00:04:43.926 10:17:38 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.926 10:17:38 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.926 10:17:38 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
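
The scheduler app being launched here runs with --wait-for-rpc, so only the RPC server comes up at first; the framework stays parked until the test selects a scheduler, which is what the framework_set_scheduler and framework_start_init calls just below complete. A minimal sketch of that deferred-init sequence, reusing the flags from the log:

    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    # With --wait-for-rpc only RPC is live; choose the scheduler first...
    ./scripts/rpc.py framework_set_scheduler dynamic
    # ...then let the reactors and subsystems initialize.
    ./scripts/rpc.py framework_start_init
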
00:04:43.926 10:17:38 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.926 10:17:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.926 [2024-07-15 10:17:38.417290] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:43.926 [2024-07-15 10:17:38.417374] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190174 ] 00:04:43.926 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.926 [2024-07-15 10:17:38.475622] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:44.184 [2024-07-15 10:17:38.586938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.184 [2024-07-15 10:17:38.586993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.184 [2024-07-15 10:17:38.587059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:44.184 [2024-07-15 10:17:38.587062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.184 10:17:38 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.184 10:17:38 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:44.184 10:17:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:44.184 10:17:38 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.184 10:17:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.184 [2024-07-15 10:17:38.631806] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:44.184 [2024-07-15 10:17:38.631831] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:44.184 [2024-07-15 10:17:38.631847] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:44.184 [2024-07-15 10:17:38.631873] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:44.184 [2024-07-15 10:17:38.631893] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:44.184 10:17:38 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.184 10:17:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:44.184 10:17:38 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.184 10:17:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.184 [2024-07-15 10:17:38.728350] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
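
scheduler_create_thread, which starts below, drives the target through a test-only rpc.py plugin: each scheduler_thread_create call registers a thread with a name, an optional pin mask, and an activity percentage, and the returned thread id feeds the later set-active and delete calls. A condensed sketch of those calls, assuming the scheduler_plugin module is importable from wherever rpc.py runs:

    rpc='./scripts/rpc.py --plugin scheduler_plugin'
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy, pinned to core 0
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle, pinned to core 0
    tid=$($rpc scheduler_thread_create -n half_active -a 0)
    $rpc scheduler_thread_set_active "$tid" 50                    # ramp it to 50% busy
    $rpc scheduler_thread_delete "$tid"
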
00:04:44.184 10:17:38 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.184 10:17:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:44.184 10:17:38 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.184 10:17:38 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.184 10:17:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.184 ************************************ 00:04:44.184 START TEST scheduler_create_thread 00:04:44.184 ************************************ 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.184 2 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.184 3 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.184 4 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.184 5 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.184 6 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.184 7 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.184 8 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.184 9 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.184 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.442 10 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.442 10:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.373 10:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.373 00:04:45.373 real 0m1.173s 00:04:45.373 user 0m0.013s 00:04:45.373 sys 0m0.002s 00:04:45.373 10:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.373 10:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.373 ************************************ 00:04:45.373 END TEST scheduler_create_thread 00:04:45.373 ************************************ 00:04:45.373 10:17:39 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:45.373 10:17:39 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:45.373 10:17:39 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2190174 00:04:45.373 10:17:39 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2190174 ']' 00:04:45.373 10:17:39 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2190174 00:04:45.373 10:17:39 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:45.374 10:17:39 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.374 10:17:39 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2190174 00:04:45.374 10:17:39 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:45.374 10:17:39 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:45.374 10:17:39 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2190174' 00:04:45.374 killing process with pid 2190174 00:04:45.374 10:17:39 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2190174 00:04:45.374 10:17:39 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2190174 00:04:45.938 [2024-07-15 10:17:40.410204] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
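
Every suite in this run tears its target down through the same killprocess helper seen here and above: resolve the pid back to a command name, special-case sudo, then kill and wait so the shell collects the exit status. A trimmed sketch of that shape (the real helper in common/autotest_common.sh carries extra checks, and follows sudo down to its child instead of bailing out):

    killprocess() {
        local pid=$1 name
        # If the pid no longer resolves to a process, there is nothing to kill.
        name=$(ps --no-headers -o comm= "$pid") || return 0
        [ "$name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
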
00:04:46.196 00:04:46.196 real 0m2.349s 00:04:46.196 user 0m2.681s 00:04:46.196 sys 0m0.337s 00:04:46.196 10:17:40 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.196 10:17:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.196 ************************************ 00:04:46.196 END TEST event_scheduler 00:04:46.196 ************************************ 00:04:46.196 10:17:40 event -- common/autotest_common.sh@1142 -- # return 0 00:04:46.196 10:17:40 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:46.196 10:17:40 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:46.196 10:17:40 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.196 10:17:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.196 10:17:40 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.196 ************************************ 00:04:46.196 START TEST app_repeat 00:04:46.196 ************************************ 00:04:46.196 10:17:40 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:46.196 10:17:40 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.196 10:17:40 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.196 10:17:40 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:46.196 10:17:40 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.196 10:17:40 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:46.196 10:17:40 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:46.196 10:17:40 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:46.196 10:17:40 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2190496 00:04:46.196 10:17:40 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:46.196 10:17:40 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.196 10:17:40 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2190496' 00:04:46.196 Process app_repeat pid: 2190496 00:04:46.196 10:17:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:46.196 10:17:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:46.196 spdk_app_start Round 0 00:04:46.196 10:17:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2190496 /var/tmp/spdk-nbd.sock 00:04:46.196 10:17:40 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2190496 ']' 00:04:46.196 10:17:40 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:46.196 10:17:40 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.196 10:17:40 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:46.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:46.196 10:17:40 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.196 10:17:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.196 [2024-07-15 10:17:40.749761] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
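The app_repeat setup traced above reduces to: start the app in the background, trap cleanup, and block until its RPC socket is up. A hedged sketch of that shape, with waitforlisten and killprocess being the autotest_common.sh helpers the test sources (paths as in this workspace):

    app=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat
    "$app" -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    # Tear the app down on interrupt or exit, as event.sh@20 does.
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    # Poll until the pid is listening on the given UNIX domain socket.
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock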
00:04:46.196 [2024-07-15 10:17:40.749829] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190496 ] 00:04:46.196 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.196 [2024-07-15 10:17:40.813322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.455 [2024-07-15 10:17:40.930792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.455 [2024-07-15 10:17:40.930798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.455 10:17:41 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.455 10:17:41 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:46.455 10:17:41 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.712 Malloc0 00:04:46.712 10:17:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.970 Malloc1 00:04:46.970 10:17:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.970 10:17:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.970 10:17:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.970 10:17:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:46.970 10:17:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.970 10:17:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:46.970 10:17:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.970 10:17:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.970 10:17:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.970 10:17:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:46.970 10:17:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.970 10:17:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:46.970 10:17:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:46.970 10:17:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:46.970 10:17:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.970 10:17:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:47.227 /dev/nbd0 00:04:47.228 10:17:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:47.228 10:17:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:47.228 10:17:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:47.228 10:17:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:47.228 10:17:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:47.228 10:17:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:47.228 10:17:41 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:47.228 10:17:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:47.228 10:17:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:47.228 10:17:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:47.228 10:17:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.228 1+0 records in 00:04:47.228 1+0 records out 00:04:47.228 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227381 s, 18.0 MB/s 00:04:47.228 10:17:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.228 10:17:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:47.228 10:17:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.228 10:17:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:47.228 10:17:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:47.228 10:17:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.228 10:17:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.228 10:17:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:47.485 /dev/nbd1 00:04:47.485 10:17:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:47.485 10:17:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:47.485 10:17:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:47.485 10:17:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:47.485 10:17:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:47.485 10:17:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:47.485 10:17:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:47.485 10:17:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:47.485 10:17:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:47.485 10:17:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:47.485 10:17:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.743 1+0 records in 00:04:47.743 1+0 records out 00:04:47.743 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237416 s, 17.3 MB/s 00:04:47.743 10:17:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.743 10:17:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:47.743 10:17:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.743 10:17:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:47.743 10:17:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:47.743 10:17:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.743 10:17:42 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.743 10:17:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.743 10:17:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.743 10:17:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:48.001 { 00:04:48.001 "nbd_device": "/dev/nbd0", 00:04:48.001 "bdev_name": "Malloc0" 00:04:48.001 }, 00:04:48.001 { 00:04:48.001 "nbd_device": "/dev/nbd1", 00:04:48.001 "bdev_name": "Malloc1" 00:04:48.001 } 00:04:48.001 ]' 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:48.001 { 00:04:48.001 "nbd_device": "/dev/nbd0", 00:04:48.001 "bdev_name": "Malloc0" 00:04:48.001 }, 00:04:48.001 { 00:04:48.001 "nbd_device": "/dev/nbd1", 00:04:48.001 "bdev_name": "Malloc1" 00:04:48.001 } 00:04:48.001 ]' 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:48.001 /dev/nbd1' 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:48.001 /dev/nbd1' 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:48.001 256+0 records in 00:04:48.001 256+0 records out 00:04:48.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508083 s, 206 MB/s 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:48.001 256+0 records in 00:04:48.001 256+0 records out 00:04:48.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021751 s, 48.2 MB/s 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:48.001 256+0 records in 00:04:48.001 256+0 records out 00:04:48.001 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.023413 s, 44.8 MB/s 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.001 10:17:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:48.259 10:17:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:48.259 10:17:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:48.259 10:17:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:48.259 10:17:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.259 10:17:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.259 10:17:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:48.259 10:17:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.259 10:17:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.259 10:17:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.259 10:17:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:48.516 10:17:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:48.516 10:17:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:48.516 10:17:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:48.516 10:17:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.516 10:17:43 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.516 10:17:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:48.516 10:17:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.516 10:17:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.516 10:17:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.516 10:17:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.516 10:17:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.774 10:17:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:48.774 10:17:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:48.774 10:17:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.774 10:17:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:48.774 10:17:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:48.774 10:17:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.774 10:17:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:48.774 10:17:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:48.774 10:17:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:48.774 10:17:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:48.774 10:17:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:48.774 10:17:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:48.774 10:17:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:49.030 10:17:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:49.287 [2024-07-15 10:17:43.869189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.592 [2024-07-15 10:17:43.985912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.592 [2024-07-15 10:17:43.985912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.592 [2024-07-15 10:17:44.047513] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:49.592 [2024-07-15 10:17:44.047594] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:52.117 10:17:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:52.117 10:17:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:52.117 spdk_app_start Round 1 00:04:52.117 10:17:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2190496 /var/tmp/spdk-nbd.sock 00:04:52.117 10:17:46 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2190496 ']' 00:04:52.117 10:17:46 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.117 10:17:46 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.117 10:17:46 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
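The nbd_get_count steps traced above (nbd_common.sh@61-66) boil down to one RPC and a jq filter; a hedged reconstruction, assuming the rpc.py path from this workspace:

    # Count the NBD devices the target currently exports.
    nbd_get_count() {
        local rpc_server=$1 json
        json=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        # grep -c prints 0 when the list is empty; '|| true' keeps set -e happy.
        echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
    }
    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)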
00:04:52.117 10:17:46 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.117 10:17:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.374 10:17:46 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.374 10:17:46 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:52.374 10:17:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.631 Malloc0 00:04:52.631 10:17:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.890 Malloc1 00:04:52.890 10:17:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.890 10:17:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.890 10:17:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.890 10:17:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:52.890 10:17:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.890 10:17:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:52.890 10:17:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.890 10:17:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.890 10:17:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.890 10:17:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:52.890 10:17:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.890 10:17:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:52.890 10:17:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:52.890 10:17:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:52.890 10:17:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.890 10:17:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:53.147 /dev/nbd0 00:04:53.147 10:17:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:53.147 10:17:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:53.147 10:17:47 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:53.147 10:17:47 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:53.147 10:17:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:53.147 10:17:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:53.147 10:17:47 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:53.147 10:17:47 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:53.147 10:17:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:53.147 10:17:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:53.147 10:17:47 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:53.147 1+0 records in 00:04:53.147 1+0 records out 00:04:53.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016876 s, 24.3 MB/s 00:04:53.147 10:17:47 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.148 10:17:47 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:53.148 10:17:47 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.148 10:17:47 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:53.148 10:17:47 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:53.148 10:17:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.148 10:17:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.148 10:17:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:53.405 /dev/nbd1 00:04:53.405 10:17:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:53.405 10:17:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:53.405 10:17:47 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:53.405 10:17:47 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:53.405 10:17:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:53.405 10:17:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:53.405 10:17:47 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:53.405 10:17:47 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:53.405 10:17:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:53.405 10:17:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:53.405 10:17:47 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.405 1+0 records in 00:04:53.405 1+0 records out 00:04:53.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177118 s, 23.1 MB/s 00:04:53.405 10:17:47 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.405 10:17:47 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:53.405 10:17:47 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.405 10:17:47 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:53.405 10:17:47 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:53.405 10:17:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.405 10:17:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.405 10:17:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.405 10:17:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.405 10:17:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:53.663 { 00:04:53.663 "nbd_device": "/dev/nbd0", 00:04:53.663 "bdev_name": "Malloc0" 00:04:53.663 }, 00:04:53.663 { 00:04:53.663 "nbd_device": "/dev/nbd1", 00:04:53.663 "bdev_name": "Malloc1" 00:04:53.663 } 00:04:53.663 ]' 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:53.663 { 00:04:53.663 "nbd_device": "/dev/nbd0", 00:04:53.663 "bdev_name": "Malloc0" 00:04:53.663 }, 00:04:53.663 { 00:04:53.663 "nbd_device": "/dev/nbd1", 00:04:53.663 "bdev_name": "Malloc1" 00:04:53.663 } 00:04:53.663 ]' 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:53.663 /dev/nbd1' 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:53.663 /dev/nbd1' 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:53.663 256+0 records in 00:04:53.663 256+0 records out 00:04:53.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00382843 s, 274 MB/s 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:53.663 256+0 records in 00:04:53.663 256+0 records out 00:04:53.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211972 s, 49.5 MB/s 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:53.663 256+0 records in 00:04:53.663 256+0 records out 00:04:53.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250821 s, 41.8 MB/s 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.663 10:17:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:53.921 10:17:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:54.178 10:17:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:54.178 10:17:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:54.178 10:17:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.178 10:17:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.178 10:17:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:54.178 10:17:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.178 10:17:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.178 10:17:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.178 10:17:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:54.435 10:17:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:54.435 10:17:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:54.435 10:17:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:54.435 10:17:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.435 10:17:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.435 10:17:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:54.435 10:17:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.435 10:17:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.435 10:17:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.435 10:17:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.435 10:17:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.692 10:17:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:54.692 10:17:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:54.692 10:17:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.692 10:17:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:54.692 10:17:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:54.692 10:17:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.692 10:17:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:54.692 10:17:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:54.692 10:17:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:54.692 10:17:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:54.692 10:17:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:54.692 10:17:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:54.692 10:17:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:54.949 10:17:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:55.207 [2024-07-15 10:17:49.698970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.207 [2024-07-15 10:17:49.814319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.207 [2024-07-15 10:17:49.814323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.464 [2024-07-15 10:17:49.877281] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:55.464 [2024-07-15 10:17:49.877360] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:57.989 10:17:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:57.989 10:17:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:57.989 spdk_app_start Round 2 00:04:57.989 10:17:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2190496 /var/tmp/spdk-nbd.sock 00:04:57.989 10:17:52 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2190496 ']' 00:04:57.989 10:17:52 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.989 10:17:52 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.990 10:17:52 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
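The write/verify pass repeated in each round follows a plain dd-then-cmp pattern; a minimal sketch, assuming two attached devices and a scratch path of our choosing (the test itself uses spdk/test/event/nbdrandtest):

    tmp=/tmp/nbdrandtest                                # hypothetical scratch path
    dd if=/dev/urandom of="$tmp" bs=4096 count=256      # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it out, bypassing the page cache
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                      # byte-compare the first 1 MiB back
    done
    rm "$tmp"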
00:04:57.990 10:17:52 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.990 10:17:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:58.246 10:17:52 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.246 10:17:52 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:58.246 10:17:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.503 Malloc0 00:04:58.503 10:17:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.762 Malloc1 00:04:58.762 10:17:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.762 10:17:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.762 10:17:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.762 10:17:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:58.762 10:17:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.762 10:17:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:58.762 10:17:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.762 10:17:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.762 10:17:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.762 10:17:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:58.762 10:17:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.762 10:17:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:58.762 10:17:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:58.762 10:17:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:58.762 10:17:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.762 10:17:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.020 /dev/nbd0 00:04:59.020 10:17:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.020 10:17:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.020 10:17:53 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:59.020 10:17:53 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:59.020 10:17:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:59.020 10:17:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.020 10:17:53 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:59.020 10:17:53 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:59.020 10:17:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.020 10:17:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.020 10:17:53 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:59.020 1+0 records in 00:04:59.020 1+0 records out 00:04:59.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176289 s, 23.2 MB/s 00:04:59.020 10:17:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.020 10:17:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:59.020 10:17:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.020 10:17:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.020 10:17:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:59.020 10:17:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.020 10:17:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.020 10:17:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.277 /dev/nbd1 00:04:59.277 10:17:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.277 10:17:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.277 10:17:53 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:59.277 10:17:53 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:59.277 10:17:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:59.277 10:17:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.277 10:17:53 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:59.277 10:17:53 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:59.277 10:17:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.277 10:17:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.277 10:17:53 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.277 1+0 records in 00:04:59.277 1+0 records out 00:04:59.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200275 s, 20.5 MB/s 00:04:59.277 10:17:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.277 10:17:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:59.277 10:17:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.277 10:17:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.277 10:17:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:59.277 10:17:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.277 10:17:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.277 10:17:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.277 10:17:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.277 10:17:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.535 10:17:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:59.535 { 00:04:59.535 "nbd_device": "/dev/nbd0", 00:04:59.535 "bdev_name": "Malloc0" 00:04:59.535 }, 00:04:59.535 { 00:04:59.535 "nbd_device": "/dev/nbd1", 00:04:59.535 "bdev_name": "Malloc1" 00:04:59.535 } 00:04:59.535 ]' 00:04:59.535 10:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.535 { 00:04:59.535 "nbd_device": "/dev/nbd0", 00:04:59.535 "bdev_name": "Malloc0" 00:04:59.535 }, 00:04:59.535 { 00:04:59.535 "nbd_device": "/dev/nbd1", 00:04:59.535 "bdev_name": "Malloc1" 00:04:59.535 } 00:04:59.535 ]' 00:04:59.535 10:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.535 10:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.535 /dev/nbd1' 00:04:59.535 10:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.535 /dev/nbd1' 00:04:59.535 10:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.536 256+0 records in 00:04:59.536 256+0 records out 00:04:59.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497753 s, 211 MB/s 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.536 256+0 records in 00:04:59.536 256+0 records out 00:04:59.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242543 s, 43.2 MB/s 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.536 256+0 records in 00:04:59.536 256+0 records out 00:04:59.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223173 s, 47.0 MB/s 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.536 10:17:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.794 10:17:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.794 10:17:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.794 10:17:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.794 10:17:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.794 10:17:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.794 10:17:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.794 10:17:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.794 10:17:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.794 10:17:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.794 10:17:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:00.051 10:17:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:00.051 10:17:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:00.051 10:17:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:00.051 10:17:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.051 10:17:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.051 10:17:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:00.051 10:17:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.051 10:17:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.051 10:17:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.051 10:17:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.051 10:17:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.308 10:17:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:00.308 10:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:00.308 10:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.566 10:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.566 10:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.566 10:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.566 10:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:00.566 10:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.566 10:17:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.566 10:17:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.566 10:17:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.566 10:17:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.566 10:17:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:00.823 10:17:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:01.080 [2024-07-15 10:17:55.514853] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.080 [2024-07-15 10:17:55.632066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.080 [2024-07-15 10:17:55.632066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.080 [2024-07-15 10:17:55.692949] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.080 [2024-07-15 10:17:55.693013] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:03.603 10:17:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2190496 /var/tmp/spdk-nbd.sock 00:05:03.603 10:17:58 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2190496 ']' 00:05:03.603 10:17:58 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.603 10:17:58 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.603 10:17:58 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
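The teardown above relies on a bounded poll of /proc/partitions after each nbd_stop_disk; a hedged reconstruction of waitfornbd_exit as traced (nbd_common.sh@35-45; the retry interval is an assumption, the trace only shows the loop bounds):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1    # device still present; retry
            else
                break        # device gone, as expected after nbd_stop_disk
            fi
        done
        return 0
    }
    waitfornbd_exit nbd0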
00:05:03.603 10:17:58 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.603 10:17:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.860 10:17:58 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.860 10:17:58 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:03.860 10:17:58 event.app_repeat -- event/event.sh@39 -- # killprocess 2190496 00:05:03.860 10:17:58 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2190496 ']' 00:05:03.860 10:17:58 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2190496 00:05:03.860 10:17:58 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:03.860 10:17:58 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.860 10:17:58 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2190496 00:05:04.118 10:17:58 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.118 10:17:58 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.118 10:17:58 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2190496' 00:05:04.118 killing process with pid 2190496 00:05:04.118 10:17:58 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2190496 00:05:04.118 10:17:58 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2190496 00:05:04.118 spdk_app_start is called in Round 0. 00:05:04.118 Shutdown signal received, stop current app iteration 00:05:04.118 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:05:04.118 spdk_app_start is called in Round 1. 00:05:04.118 Shutdown signal received, stop current app iteration 00:05:04.118 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:05:04.118 spdk_app_start is called in Round 2. 00:05:04.118 Shutdown signal received, stop current app iteration 00:05:04.118 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:05:04.118 spdk_app_start is called in Round 3. 
00:05:04.118 Shutdown signal received, stop current app iteration
00:05:04.376 10:17:58 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:04.376 10:17:58 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:04.376
00:05:04.376 real 0m18.048s
00:05:04.376 user 0m39.118s
00:05:04.376 sys 0m3.110s
00:05:04.376 10:17:58 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:04.376 10:17:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:04.376 ************************************
00:05:04.376 END TEST app_repeat
00:05:04.376 ************************************
00:05:04.376 10:17:58 event -- common/autotest_common.sh@1142 -- # return 0
00:05:04.376 10:17:58 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:04.376 10:17:58 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:04.376 10:17:58 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:04.376 10:17:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:04.376 10:17:58 event -- common/autotest_common.sh@10 -- # set +x
00:05:04.376 ************************************
00:05:04.376 START TEST cpu_locks
00:05:04.376 ************************************
00:05:04.376 10:17:58 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:04.376 * Looking for test storage...
00:05:04.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:04.376 10:17:58 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:04.376 10:17:58 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:04.376 10:17:58 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:04.376 10:17:58 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:04.376 10:17:58 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:04.376 10:17:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:04.376 10:17:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:04.376 ************************************
00:05:04.376 START TEST default_locks
00:05:04.376 ************************************
00:05:04.376 10:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks
00:05:04.376 10:17:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2192848
00:05:04.376 10:17:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:04.376 10:17:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2192848
00:05:04.376 10:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2192848 ']'
00:05:04.376 10:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:04.376 10:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:04.376 10:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
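[Editor's note] waitforlisten is what turns the "Waiting for process to start up..." message above into a bounded poll: with max_retries=100, it repeatedly checks that the pid is still alive and that the RPC socket answers before the test proceeds. An illustrative sketch of that pattern, assuming an SPDK checkout (rpc_get_methods is a standard SPDK RPC; the loop shape here is an approximation, not the verbatim helper):

# Illustrative wait-for-RPC poll in the spirit of waitforlisten; the
# retry budget mirrors the max_retries=100 visible in the trace.
waitforsock() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1               # target died during startup
        if ./scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0                                         # socket is answering
        fi
        sleep 0.5
    done
    return 1
}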
00:05:04.376 10:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:04.376 10:17:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:04.376 [2024-07-15 10:17:58.948786] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:04.376 [2024-07-15 10:17:58.948881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192848 ]
00:05:04.376 EAL: No free 2048 kB hugepages reported on node 1
00:05:04.376 [2024-07-15 10:17:59.005939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:04.634 [2024-07-15 10:17:59.112906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:04.891 10:17:59 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:04.891 10:17:59 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0
00:05:04.891 10:17:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2192848
00:05:04.891 10:17:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2192848
00:05:04.891 10:17:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:05.454 lslocks: write error
00:05:05.454 10:17:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2192848
00:05:05.454 10:17:59 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2192848 ']'
00:05:05.454 10:17:59 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2192848
00:05:05.454 10:17:59 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname
00:05:05.454 10:17:59 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:05.454 10:17:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2192848
00:05:05.454 10:17:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:05.454 10:17:59 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:05.454 10:17:59 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2192848'
killing process with pid 2192848
00:05:05.454 10:17:59 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2192848
00:05:05.454 10:17:59 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2192848
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2192848
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2192848
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 2192848
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2192848 ']'
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:05.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2192848) - No such process
00:05:05.711 ERROR: process (pid: 2192848) is no longer running
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:05.711
00:05:05.711 real 0m1.439s
00:05:05.711 user 0m1.393s
00:05:05.711 sys 0m0.559s
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:05.711 10:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:05.711 ************************************
00:05:05.711 END TEST default_locks
00:05:05.711 ************************************
00:05:05.968 10:18:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:05.968 10:18:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:05.968 10:18:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:05.968 10:18:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:05.968 10:18:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:05.969 ************************************
00:05:05.969 START TEST default_locks_via_rpc
00:05:05.969 ************************************
00:05:05.969 10:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc
00:05:05.969 10:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2193047
00:05:05.969 10:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:05.969 10:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2193047 /var/tmp/spdk.sock
00:05:05.969 10:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2193047 ']'
00:05:05.969 10:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:05.969 10:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:05.969 10:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:05.969 10:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:05.969 10:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:05.969 [2024-07-15 10:18:00.438726] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:05.969 [2024-07-15 10:18:00.438821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193047 ]
00:05:05.969 EAL: No free 2048 kB hugepages reported on node 1
00:05:05.969 [2024-07-15 10:18:00.505566] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:06.227 [2024-07-15 10:18:00.623623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:06.502 10:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:06.502 10:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:05:06.502 10:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:06.502 10:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:06.502 10:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:06.502 10:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:06.502 10:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:06.502 10:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:06.502 10:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:06.502 10:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:06.502 10:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:06.502 10:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:06.502 10:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:06.502 10:18:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:06.502 10:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2193047
00:05:06.502 10:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2193047
00:05:06.502 10:18:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:06.761 10:18:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2193047
00:05:06.761 10:18:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2193047 ']'
00:05:06.761 10:18:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2193047
00:05:06.761 10:18:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname
00:05:06.761 10:18:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:06.762 10:18:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2193047
00:05:06.762 10:18:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:06.762 10:18:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:06.762 10:18:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2193047'
killing process with pid 2193047
00:05:06.762 10:18:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2193047
00:05:06.762 10:18:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2193047
00:05:07.325
00:05:07.325 real 0m1.290s
00:05:07.325 user 0m1.234s
00:05:07.325 sys 0m0.528s
00:05:07.325 10:18:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:07.325 10:18:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:07.325 ************************************
00:05:07.325 END TEST default_locks_via_rpc
00:05:07.325 ************************************
00:05:07.325 10:18:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:07.325 10:18:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:07.325 10:18:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:07.325 10:18:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:07.325 10:18:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:07.325 ************************************
00:05:07.325 START TEST non_locking_app_on_locked_coremask
00:05:07.325 ************************************
00:05:07.325 10:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask
00:05:07.325 10:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2193405
00:05:07.325 10:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:07.325 10:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2193405 /var/tmp/spdk.sock
00:05:07.325 10:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2193405 ']'
00:05:07.325 10:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:07.325 10:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:07.325 10:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:07.325 10:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:07.325 10:18:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:07.325 [2024-07-15 10:18:01.772506] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:07.325 [2024-07-15 10:18:01.772606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193405 ]
00:05:07.325 EAL: No free 2048 kB hugepages reported on node 1
00:05:07.325 [2024-07-15 10:18:01.829481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:07.325 [2024-07-15 10:18:01.936471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:07.603 10:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:07.603 10:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:07.603 10:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2193422
00:05:07.603 10:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:07.603 10:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2193422 /var/tmp/spdk2.sock
00:05:07.603 10:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2193422 ']'
00:05:07.603 10:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:07.603 10:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:07.603 10:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:07.603 10:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:07.603 10:18:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:07.860 [2024-07-15 10:18:02.247908] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:07.860 [2024-07-15 10:18:02.248018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193422 ]
00:05:07.860 EAL: No free 2048 kB hugepages reported on node 1
00:05:07.860 [2024-07-15 10:18:02.338556] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated.
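[Editor's note] Two checks recur through the traces above: locks_exist, which attributes POSIX lock entries on the /var/tmp/spdk_cpu_lock_* files to a pid via lslocks, and the runtime toggles framework_disable_cpumask_locks / framework_enable_cpumask_locks that default_locks_via_rpc exercised. A standalone sketch, assuming an SPDK checkout and a target already serving /var/tmp/spdk.sock (the pgrep lookup is illustrative):

# Minimal sketch of the lock checks traced above.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock       # true iff pid $1 holds a core lock file
}

pid=$(pgrep -f spdk_tgt | head -n1)               # illustrative way to find the target
locks_exist "$pid" && echo "core locks held by $pid"

./scripts/rpc.py framework_disable_cpumask_locks  # drop the lock files at runtime
locks_exist "$pid" || echo "core locks released"

./scripts/rpc.py framework_enable_cpumask_locks   # claim them again

The recurring "lslocks: write error" lines are consistent with grep -q closing the pipe as soon as it matches, leaving lslocks with an EPIPE; they do not indicate a failed check.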
00:05:07.860 [2024-07-15 10:18:02.338588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:08.117 [2024-07-15 10:18:02.577300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.681 10:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:08.681 10:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:08.681 10:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2193405
00:05:08.681 10:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2193405
00:05:08.681 10:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:08.938 lslocks: write error
00:05:08.938 10:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2193405
00:05:08.938 10:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2193405 ']'
00:05:08.938 10:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2193405
00:05:08.938 10:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:08.938 10:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:08.939 10:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2193405
00:05:08.939 10:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:08.939 10:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:08.939 10:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2193405'
killing process with pid 2193405
00:05:08.939 10:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2193405
00:05:08.939 10:18:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2193405
00:05:09.871 10:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2193422
00:05:09.871 10:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2193422 ']'
00:05:09.871 10:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2193422
00:05:09.871 10:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:09.871 10:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:09.871 10:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2193422
00:05:09.871 10:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:09.871 10:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:09.871 10:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2193422'
killing process with pid 2193422
00:05:09.871 10:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2193422
00:05:09.871 10:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2193422
00:05:10.435
00:05:10.435 real 0m3.229s
00:05:10.435 user 0m3.376s
00:05:10.435 sys 0m1.016s
00:05:10.435 10:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:10.435 10:18:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:10.435 ************************************
00:05:10.435 END TEST non_locking_app_on_locked_coremask
00:05:10.435 ************************************
00:05:10.435 10:18:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:10.435 10:18:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:10.435 10:18:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:10.435 10:18:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:10.435 10:18:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:10.435 ************************************
00:05:10.435 START TEST locking_app_on_unlocked_coremask
00:05:10.435 ************************************
00:05:10.435 10:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask
00:05:10.435 10:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2193785
00:05:10.435 10:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:10.435 10:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2193785 /var/tmp/spdk.sock
00:05:10.435 10:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2193785 ']'
00:05:10.435 10:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:10.435 10:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:10.435 10:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:10.435 10:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:10.435 10:18:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:10.435 [2024-07-15 10:18:05.047293] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
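[Editor's note] non_locking_app_on_locked_coremask, which just finished above, is the positive control for lock avoidance: the first target claims core 0, and a second instance is allowed onto the same core only because it opts out of enforcement and moves its RPC server to a second socket. The scenario reduced to its two launches (paths assume an SPDK build tree):

# Sketch of the two-instance scenario from the trace above.
./build/bin/spdk_tgt -m 0x1 &                       # claims /var/tmp/spdk_cpu_lock_000
first=$!
./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
    -r /var/tmp/spdk2.sock &                        # shares core 0 without taking the lock
second=$!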
00:05:10.435 [2024-07-15 10:18:05.047382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193785 ]
00:05:10.435 EAL: No free 2048 kB hugepages reported on node 1
00:05:10.693 [2024-07-15 10:18:05.105781] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:10.693 [2024-07-15 10:18:05.105820] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:10.693 [2024-07-15 10:18:05.214142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:10.951 10:18:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:10.951 10:18:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:10.951 10:18:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2193857
00:05:10.951 10:18:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:10.951 10:18:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2193857 /var/tmp/spdk2.sock
00:05:10.951 10:18:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2193857 ']'
00:05:10.951 10:18:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:10.951 10:18:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:10.951 10:18:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:10.951 10:18:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:10.951 10:18:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:10.951 [2024-07-15 10:18:05.520445] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:10.951 [2024-07-15 10:18:05.520536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193857 ]
00:05:11.207 EAL: No free 2048 kB hugepages reported on node 1
00:05:11.207 [2024-07-15 10:18:05.611770] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:11.207 [2024-07-15 10:18:05.850809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:12.138 10:18:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:12.138 10:18:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:12.138 10:18:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2193857
00:05:12.138 10:18:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2193857
00:05:12.138 10:18:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:12.700 lslocks: write error
00:05:12.700 10:18:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2193785
00:05:12.700 10:18:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2193785 ']'
00:05:12.700 10:18:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2193785
00:05:12.700 10:18:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:12.701 10:18:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:12.701 10:18:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2193785
00:05:12.701 10:18:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:12.701 10:18:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:12.701 10:18:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2193785'
killing process with pid 2193785
00:05:12.701 10:18:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2193785
00:05:12.701 10:18:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2193785
00:05:13.632 10:18:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2193857
00:05:13.632 10:18:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2193857 ']'
00:05:13.632 10:18:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2193857
00:05:13.632 10:18:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:13.632 10:18:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:13.632 10:18:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2193857
00:05:13.632 10:18:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:13.632 10:18:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:13.632 10:18:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2193857'
killing process with pid 2193857
00:05:13.632 10:18:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2193857
00:05:13.632 10:18:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2193857
00:05:13.890
00:05:13.890 real 0m3.476s
00:05:13.890 user 0m3.641s
00:05:13.890 sys 0m1.068s
00:05:13.890 10:18:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:13.890 10:18:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:13.890 ************************************
00:05:13.890 END TEST locking_app_on_unlocked_coremask
00:05:13.890 ************************************
00:05:13.890 10:18:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:13.890 10:18:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:13.890 10:18:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:13.890 10:18:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:13.890 10:18:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:13.890 ************************************
00:05:13.890 START TEST locking_app_on_locked_coremask
00:05:13.890 ************************************
00:05:13.890 10:18:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask
00:05:13.890 10:18:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2194664
00:05:13.890 10:18:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:13.890 10:18:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2194664 /var/tmp/spdk.sock
00:05:13.890 10:18:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2194664 ']'
00:05:13.890 10:18:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:13.890 10:18:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:13.890 10:18:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:13.890 10:18:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:13.890 10:18:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:14.148 [2024-07-15 10:18:08.573930] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
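[Editor's note] locking_app_on_locked_coremask, starting above, is the negative control: a second target on the already-claimed core 0 must fail, so the test wraps waitforlisten in the NOT helper whose internals (valid_exec_arg, es=1, (( !es == 0 ))) show up throughout these traces. Its shape, as a hedged paraphrase of autotest_common.sh rather than the verbatim source:

# Hedged paraphrase of the NOT helper seen in the traces: run a command
# that is expected to fail, and succeed only if it really did fail.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))      # invert: a non-zero exit from the wrapped command is a pass
}

# Usage mirroring the test below: the second target must never come up.
# NOT waitforlisten "$pid2" /var/tmp/spdk2.sock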
00:05:14.148 [2024-07-15 10:18:08.574015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2194664 ]
00:05:14.148 EAL: No free 2048 kB hugepages reported on node 1
00:05:14.148 [2024-07-15 10:18:08.637939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:14.148 [2024-07-15 10:18:08.758320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2194781
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2194781 /var/tmp/spdk2.sock
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2194781 /var/tmp/spdk2.sock
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2194781 /var/tmp/spdk2.sock
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2194781 ']'
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:14.405 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:14.662 [2024-07-15 10:18:09.070394] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:14.662 [2024-07-15 10:18:09.070487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2194781 ]
00:05:14.662 EAL: No free 2048 kB hugepages reported on node 1
00:05:14.662 [2024-07-15 10:18:09.155547] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2194664 has claimed it.
00:05:14.662 [2024-07-15 10:18:09.155616] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:15.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2194781) - No such process
00:05:15.224 ERROR: process (pid: 2194781) is no longer running
00:05:15.224 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:15.224 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1
00:05:15.224 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1
00:05:15.224 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:15.224 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:15.224 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:15.224 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2194664
00:05:15.224 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2194664
00:05:15.224 10:18:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:15.788 lslocks: write error
00:05:15.788 10:18:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2194664
00:05:15.788 10:18:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2194664 ']'
00:05:15.788 10:18:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2194664
00:05:15.788 10:18:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:15.788 10:18:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:15.788 10:18:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2194664
00:05:15.788 10:18:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:15.788 10:18:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:15.788 10:18:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2194664'
killing process with pid 2194664
00:05:15.788 10:18:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2194664
00:05:15.788 10:18:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2194664
00:05:16.351
00:05:16.351 real 0m2.242s
00:05:16.351 user 0m2.398s
00:05:16.351 sys 0m0.680s
00:05:16.351 10:18:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:16.351 10:18:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:16.351 ************************************
00:05:16.351 END TEST locking_app_on_locked_coremask
00:05:16.351 ************************************
00:05:16.351 10:18:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:16.351 10:18:10 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:16.351 10:18:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:16.351 10:18:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:16.351 10:18:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:16.351 ************************************
00:05:16.351 START TEST locking_overlapped_coremask
00:05:16.351 ************************************
00:05:16.351 10:18:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask
00:05:16.351 10:18:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2195089
00:05:16.351 10:18:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:16.351 10:18:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2195089 /var/tmp/spdk.sock
00:05:16.351 10:18:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2195089 ']'
00:05:16.351 10:18:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:16.351 10:18:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:16.351 10:18:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:16.351 10:18:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:16.351 10:18:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:16.351 [2024-07-15 10:18:10.864438] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
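[Editor's note] locking_overlapped_coremask, starting above, moves from identical masks to partially overlapping ones: the first target runs with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so exactly one core is contested. The arithmetic:

# Cores 0-2 vs cores 2-4: the intersection is core 2, which is why the
# claim error further below names core 2 specifically.
printf 'contested cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2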
00:05:16.352 [2024-07-15 10:18:10.864527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2195089 ]
00:05:16.352 EAL: No free 2048 kB hugepages reported on node 1
00:05:16.352 [2024-07-15 10:18:10.927178] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:16.608 [2024-07-15 10:18:11.049217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:16.608 [2024-07-15 10:18:11.049267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:16.608 [2024-07-15 10:18:11.049284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2195179
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2195179 /var/tmp/spdk2.sock
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2195179 /var/tmp/spdk2.sock
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2195179 /var/tmp/spdk2.sock
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2195179 ']'
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:17.175 10:18:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:17.432 [2024-07-15 10:18:11.870767] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:17.432 [2024-07-15 10:18:11.870853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2195179 ]
00:05:17.432 EAL: No free 2048 kB hugepages reported on node 1
00:05:17.432 [2024-07-15 10:18:11.959431] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2195089 has claimed it.
00:05:17.432 [2024-07-15 10:18:11.959482] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:17.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2195179) - No such process
00:05:17.996 ERROR: process (pid: 2195179) is no longer running
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2195089
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2195089 ']'
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2195089
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2195089
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2195089'
killing process with pid 2195089
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 2195089
00:05:17.996 10:18:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2195089
00:05:18.561
00:05:18.561 real 0m2.236s
00:05:18.561 user 0m6.248s
00:05:18.561 sys 0m0.513s
00:05:18.561 10:18:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:18.561 10:18:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:18.561 ************************************
00:05:18.561 END TEST locking_overlapped_coremask
00:05:18.561 ************************************
00:05:18.561 10:18:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:18.561 10:18:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:18.561 10:18:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:18.561 10:18:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:18.561 10:18:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:18.561 ************************************
00:05:18.561 START TEST locking_overlapped_coremask_via_rpc
00:05:18.561 ************************************
00:05:18.561 10:18:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc
00:05:18.561 10:18:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2195391
00:05:18.561 10:18:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:18.561 10:18:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2195391 /var/tmp/spdk.sock
00:05:18.561 10:18:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2195391 ']'
00:05:18.561 10:18:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:18.561 10:18:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:18.561 10:18:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:18.561 10:18:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:18.561 10:18:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:18.820 [2024-07-15 10:18:13.152225] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:18.820 [2024-07-15 10:18:13.152316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2195391 ]
00:05:18.820 EAL: No free 2048 kB hugepages reported on node 1
00:05:18.820 [2024-07-15 10:18:13.213025] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated.
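[Editor's note] The check_remaining_locks sequence traced above verifies cleanup by globbing: with a single -m 0x7 target as the only lock holder, exactly the files for cores 0-2 should exist, no more and no fewer. Spelled out as a standalone check:

# The glob comparison from check_remaining_locks, as traced above.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
if [[ "${locks[*]}" != "${locks_expected[*]}" ]]; then
    echo "unexpected lock files: ${locks[*]}" >&2
    exit 1
fi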
00:05:18.820 [2024-07-15 10:18:13.213065] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:18.820 [2024-07-15 10:18:13.335823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.820 [2024-07-15 10:18:13.335907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.820 [2024-07-15 10:18:13.335912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.833 10:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.833 10:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:19.833 10:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2195508 00:05:19.833 10:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:19.833 10:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2195508 /var/tmp/spdk2.sock 00:05:19.833 10:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2195508 ']' 00:05:19.833 10:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.833 10:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.833 10:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.833 10:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.833 10:18:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.833 [2024-07-15 10:18:14.143364] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:19.833 [2024-07-15 10:18:14.143450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2195508 ] 00:05:19.833 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.833 [2024-07-15 10:18:14.231755] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
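Note how the overlap is set up: mask 0x7 covers cores 0-2 and mask 0x1c covers cores 2-4, so core 2 is contested, yet both targets reach their reactors because locking is deferred. Condensed to the two launches traced here (command lines from the trace; the backgrounding is added for the sketch):

    # Hypothetical condensed form of the overlapping launches: both start
    # cleanly despite sharing core 2, because --disable-cpumask-locks defers
    # the lock-file claim until framework_enable_cpumask_locks is called.
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &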
00:05:19.833 [2024-07-15 10:18:14.231797] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:19.833 [2024-07-15 10:18:14.447965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.833 [2024-07-15 10:18:14.451933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:19.833 [2024-07-15 10:18:14.451936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.765 [2024-07-15 10:18:15.086980] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2195391 has claimed it. 
00:05:20.765 request: 00:05:20.765 { 00:05:20.765 "method": "framework_enable_cpumask_locks", 00:05:20.765 "req_id": 1 00:05:20.765 } 00:05:20.765 Got JSON-RPC error response 00:05:20.765 response: 00:05:20.765 { 00:05:20.765 "code": -32603, 00:05:20.765 "message": "Failed to claim CPU core: 2" 00:05:20.765 } 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2195391 /var/tmp/spdk.sock 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2195391 ']' 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2195508 /var/tmp/spdk2.sock 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2195508 ']' 00:05:20.765 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.766 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.766 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
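With both targets up, the via_rpc variant moves the conflict to RPC time: enabling locks on the first target succeeds, and the same request to the second target returns the -32603 error shown in the JSON response above. A hedged replay with SPDK's rpc.py (socket paths from the trace):

    # Hypothetical replay of the RPC exchange: the first target claims its
    # cores; the second overlaps on core 2 and gets back
    # {"code": -32603, "message": "Failed to claim CPU core: 2"}.
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks || \
        echo "second target failed to claim core 2, as the test expects"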
00:05:20.766 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.766 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.023 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.023 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:21.023 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:21.023 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:21.023 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:21.023 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:21.023 00:05:21.023 real 0m2.513s 00:05:21.023 user 0m1.245s 00:05:21.023 sys 0m0.197s 00:05:21.023 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.023 10:18:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.023 ************************************ 00:05:21.023 END TEST locking_overlapped_coremask_via_rpc 00:05:21.023 ************************************ 00:05:21.023 10:18:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:21.023 10:18:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:21.023 10:18:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2195391 ]] 00:05:21.023 10:18:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2195391 00:05:21.023 10:18:15 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2195391 ']' 00:05:21.023 10:18:15 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2195391 00:05:21.023 10:18:15 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:21.023 10:18:15 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.023 10:18:15 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2195391 00:05:21.023 10:18:15 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.023 10:18:15 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.023 10:18:15 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2195391' 00:05:21.023 killing process with pid 2195391 00:05:21.023 10:18:15 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2195391 00:05:21.023 10:18:15 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2195391 00:05:21.587 10:18:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2195508 ]] 00:05:21.587 10:18:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2195508 00:05:21.587 10:18:16 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2195508 ']' 00:05:21.587 10:18:16 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2195508 00:05:21.587 10:18:16 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:21.587 10:18:16 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.587 10:18:16 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2195508 00:05:21.587 10:18:16 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:21.587 10:18:16 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:21.587 10:18:16 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2195508' 00:05:21.587 killing process with pid 2195508 00:05:21.587 10:18:16 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2195508 00:05:21.587 10:18:16 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2195508 00:05:22.152 10:18:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:22.152 10:18:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:22.152 10:18:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2195391 ]] 00:05:22.152 10:18:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2195391 00:05:22.152 10:18:16 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2195391 ']' 00:05:22.152 10:18:16 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2195391 00:05:22.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2195391) - No such process 00:05:22.152 10:18:16 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2195391 is not found' 00:05:22.152 Process with pid 2195391 is not found 00:05:22.152 10:18:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2195508 ]] 00:05:22.152 10:18:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2195508 00:05:22.152 10:18:16 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2195508 ']' 00:05:22.152 10:18:16 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2195508 00:05:22.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2195508) - No such process 00:05:22.152 10:18:16 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2195508 is not found' 00:05:22.152 Process with pid 2195508 is not found 00:05:22.152 10:18:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:22.152 00:05:22.152 real 0m17.779s 00:05:22.152 user 0m32.103s 00:05:22.152 sys 0m5.468s 00:05:22.152 10:18:16 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.152 10:18:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.152 ************************************ 00:05:22.152 END TEST cpu_locks 00:05:22.152 ************************************ 00:05:22.152 10:18:16 event -- common/autotest_common.sh@1142 -- # return 0 00:05:22.152 00:05:22.152 real 0m42.468s 00:05:22.152 user 1m20.718s 00:05:22.152 sys 0m9.400s 00:05:22.152 10:18:16 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.152 10:18:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.152 ************************************ 00:05:22.153 END TEST event 00:05:22.153 ************************************ 00:05:22.153 10:18:16 -- common/autotest_common.sh@1142 -- # return 0 00:05:22.153 10:18:16 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:22.153 10:18:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.153 10:18:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.153 
10:18:16 -- common/autotest_common.sh@10 -- # set +x 00:05:22.153 ************************************ 00:05:22.153 START TEST thread 00:05:22.153 ************************************ 00:05:22.153 10:18:16 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:22.153 * Looking for test storage... 00:05:22.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:22.153 10:18:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:22.153 10:18:16 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:22.153 10:18:16 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.153 10:18:16 thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.153 ************************************ 00:05:22.153 START TEST thread_poller_perf 00:05:22.153 ************************************ 00:05:22.153 10:18:16 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:22.153 [2024-07-15 10:18:16.752833] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:22.153 [2024-07-15 10:18:16.752916] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2195904 ] 00:05:22.153 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.411 [2024-07-15 10:18:16.816980] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.411 [2024-07-15 10:18:16.937126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.411 Running 1000 pollers for 1 seconds with 1 microseconds period. 
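Every block in this log is framed by the same run_test <name> <command...> banners. A hedged sketch of that wrapper, reduced to the output visible here; the real helper in autotest_common.sh also validates its argument count and records timing:

    # Hypothetical minimal run_test producing the START/END banners seen
    # throughout this log.
    run_test() {
        local name=$1; shift
        printf '************************************\n'
        printf 'START TEST %s\n' "$name"
        printf '************************************\n'
        "$@"
        local rc=$?
        printf '************************************\n'
        printf 'END TEST %s\n' "$name"
        printf '************************************\n'
        return "$rc"
    }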
00:05:23.782 ====================================== 00:05:23.782 busy:2709974138 (cyc) 00:05:23.782 total_run_count: 299000 00:05:23.782 tsc_hz: 2700000000 (cyc) 00:05:23.782 ====================================== 00:05:23.782 poller_cost: 9063 (cyc), 3356 (nsec) 00:05:23.782 00:05:23.782 real 0m1.327s 00:05:23.782 user 0m1.239s 00:05:23.782 sys 0m0.083s 00:05:23.782 10:18:18 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.782 10:18:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.782 ************************************ 00:05:23.782 END TEST thread_poller_perf 00:05:23.782 ************************************ 00:05:23.782 10:18:18 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:23.782 10:18:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:23.782 10:18:18 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:23.782 10:18:18 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.782 10:18:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.782 ************************************ 00:05:23.782 START TEST thread_poller_perf 00:05:23.782 ************************************ 00:05:23.782 10:18:18 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:23.782 [2024-07-15 10:18:18.131969] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:23.782 [2024-07-15 10:18:18.132032] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2196063 ] 00:05:23.782 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.782 [2024-07-15 10:18:18.194471] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.782 [2024-07-15 10:18:18.309549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.782 Running 1000 pollers for 1 seconds with 0 microseconds period. 
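The summary table above is pure arithmetic over the counters: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure rescales by the TSC rate. A quick hedged check of the first run's numbers:

    # Hypothetical sanity check of the summary above:
    #   poller_cost(cyc)  = busy / total_run_count
    #   poller_cost(nsec) = poller_cost(cyc) * 1e9 / tsc_hz
    busy=2709974138 runs=299000 tsc_hz=2700000000
    echo "cyc:  $(( busy / runs ))"                        # -> 9063
    echo "nsec: $(( busy / runs * 1000000000 / tsc_hz ))"  # -> 3356

The second run below uses a 0 microsecond period (busy pollers) and lands at 698 cycles per call, so the 1 microsecond timer period costs roughly 13x more per poller invocation in this trace.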
00:05:25.152 ====================================== 00:05:25.152 busy:2702889025 (cyc) 00:05:25.152 total_run_count: 3868000 00:05:25.152 tsc_hz: 2700000000 (cyc) 00:05:25.152 ====================================== 00:05:25.152 poller_cost: 698 (cyc), 258 (nsec) 00:05:25.152 00:05:25.152 real 0m1.317s 00:05:25.152 user 0m1.236s 00:05:25.152 sys 0m0.075s 00:05:25.152 10:18:19 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.152 10:18:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:25.152 ************************************ 00:05:25.152 END TEST thread_poller_perf 00:05:25.152 ************************************ 00:05:25.152 10:18:19 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:25.152 10:18:19 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:25.152 00:05:25.152 real 0m2.790s 00:05:25.152 user 0m2.538s 00:05:25.152 sys 0m0.252s 00:05:25.152 10:18:19 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.152 10:18:19 thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.152 ************************************ 00:05:25.152 END TEST thread 00:05:25.152 ************************************ 00:05:25.152 10:18:19 -- common/autotest_common.sh@1142 -- # return 0 00:05:25.152 10:18:19 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:25.153 10:18:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.153 10:18:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.153 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:05:25.153 ************************************ 00:05:25.153 START TEST accel 00:05:25.153 ************************************ 00:05:25.153 10:18:19 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:25.153 * Looking for test storage... 00:05:25.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:25.153 10:18:19 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:25.153 10:18:19 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:25.153 10:18:19 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:25.153 10:18:19 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2196256 00:05:25.153 10:18:19 accel -- accel/accel.sh@63 -- # waitforlisten 2196256 00:05:25.153 10:18:19 accel -- common/autotest_common.sh@829 -- # '[' -z 2196256 ']' 00:05:25.153 10:18:19 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.153 10:18:19 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:25.153 10:18:19 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:25.153 10:18:19 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.153 10:18:19 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.153 10:18:19 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
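Each target or example launch in this log is gated by waitforlisten <pid> [rpc socket] before the first RPC is issued. A minimal hedged sketch of that pattern; the max_retries=100 default and the socket path appear in the trace, while the probe RPC is an assumption (any cheap method would do):

    # Hypothetical waitforlisten: poll until the RPC socket answers, bailing
    # out early if the process died. The real helper also propagates xtrace
    # state and accepts a custom retry budget.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited early
            ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }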
00:05:25.153 10:18:19 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.153 10:18:19 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.153 10:18:19 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.153 10:18:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.153 10:18:19 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.153 10:18:19 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.153 10:18:19 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:25.153 10:18:19 accel -- accel/accel.sh@41 -- # jq -r . 00:05:25.153 [2024-07-15 10:18:19.621136] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:25.153 [2024-07-15 10:18:19.621233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2196256 ] 00:05:25.153 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.153 [2024-07-15 10:18:19.682016] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.153 [2024-07-15 10:18:19.791060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.410 10:18:20 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.410 10:18:20 accel -- common/autotest_common.sh@862 -- # return 0 00:05:25.410 10:18:20 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:25.410 10:18:20 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:25.410 10:18:20 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:25.410 10:18:20 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:25.411 10:18:20 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:25.411 10:18:20 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:25.411 10:18:20 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.411 10:18:20 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:25.411 10:18:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.668 10:18:20 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.668 10:18:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:25.668 10:18:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:25.668 10:18:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:25.668 10:18:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:25.668 10:18:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:25.668 10:18:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:25.668 10:18:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:25.668 10:18:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:25.668 10:18:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:25.668 10:18:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:25.668 10:18:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:25.668 10:18:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:25.668 10:18:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:25.668 10:18:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:25.668 10:18:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:25.668 10:18:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:25.668 10:18:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:25.668 10:18:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:25.668 10:18:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:25.668 10:18:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:25.668 10:18:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:25.668 10:18:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:25.668 
10:18:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:25.668 10:18:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:25.668 10:18:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:25.668 10:18:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:25.668 10:18:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:25.669 10:18:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:25.669 10:18:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:25.669 10:18:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:25.669 10:18:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:25.669 10:18:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:25.669 10:18:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:25.669 10:18:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:25.669 10:18:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:25.669 10:18:20 accel -- accel/accel.sh@75 -- # killprocess 2196256 00:05:25.669 10:18:20 accel -- common/autotest_common.sh@948 -- # '[' -z 2196256 ']' 00:05:25.669 10:18:20 accel -- common/autotest_common.sh@952 -- # kill -0 2196256 00:05:25.669 10:18:20 accel -- common/autotest_common.sh@953 -- # uname 00:05:25.669 10:18:20 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.669 10:18:20 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2196256 00:05:25.669 10:18:20 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.669 10:18:20 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.669 10:18:20 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2196256' 00:05:25.669 killing process with pid 2196256 00:05:25.669 10:18:20 accel -- common/autotest_common.sh@967 -- # kill 2196256 00:05:25.669 10:18:20 accel -- common/autotest_common.sh@972 -- # wait 2196256 00:05:26.234 10:18:20 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:26.234 10:18:20 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:26.234 10:18:20 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:26.234 10:18:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.234 10:18:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.234 10:18:20 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:26.234 10:18:20 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:26.234 10:18:20 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:26.234 10:18:20 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.235 10:18:20 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.235 10:18:20 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.235 10:18:20 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.235 10:18:20 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.235 10:18:20 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:26.235 10:18:20 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:26.235 10:18:20 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.235 10:18:20 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:26.235 10:18:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:26.235 10:18:20 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:26.235 10:18:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:26.235 10:18:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.235 10:18:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.235 ************************************ 00:05:26.235 START TEST accel_missing_filename 00:05:26.235 ************************************ 00:05:26.235 10:18:20 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:26.235 10:18:20 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:26.235 10:18:20 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:26.235 10:18:20 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:26.235 10:18:20 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.235 10:18:20 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:26.235 10:18:20 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.235 10:18:20 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:26.235 10:18:20 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:26.235 10:18:20 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:26.235 10:18:20 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.235 10:18:20 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.235 10:18:20 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.235 10:18:20 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.235 10:18:20 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.235 10:18:20 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:26.235 10:18:20 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:26.235 [2024-07-15 10:18:20.698471] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:26.235 [2024-07-15 10:18:20.698538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2196424 ] 00:05:26.235 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.235 [2024-07-15 10:18:20.762458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.235 [2024-07-15 10:18:20.882725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.493 [2024-07-15 10:18:20.945022] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:26.493 [2024-07-15 10:18:21.033591] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:26.751 A filename is required. 
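accel_missing_filename is a pure negative test: per the usage text dumped later in this log, -l names the input file for compress/decompress workloads, so a compress run without it must abort with the "A filename is required." error above. Reproduced stand-alone (binary path as used in this run):

    # Hypothetical stand-alone reproduction of the assertion: a compress
    # workload with no "-l <input file>" must exit non-zero.
    if ./build/examples/accel_perf -t 1 -w compress; then
        echo "unexpected success" >&2
        exit 1
    fi
    echo "accel_perf refused to start, as the test expects"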
00:05:26.751 10:18:21 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:26.751 10:18:21 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:26.751 10:18:21 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:26.751 10:18:21 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:26.751 10:18:21 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:26.751 10:18:21 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:26.751 00:05:26.751 real 0m0.480s 00:05:26.751 user 0m0.370s 00:05:26.751 sys 0m0.143s 00:05:26.751 10:18:21 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.751 10:18:21 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:26.751 ************************************ 00:05:26.751 END TEST accel_missing_filename 00:05:26.751 ************************************ 00:05:26.751 10:18:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:26.751 10:18:21 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:26.751 10:18:21 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:26.751 10:18:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.751 10:18:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.751 ************************************ 00:05:26.751 START TEST accel_compress_verify 00:05:26.751 ************************************ 00:05:26.751 10:18:21 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:26.751 10:18:21 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:26.751 10:18:21 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:26.751 10:18:21 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:26.751 10:18:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.751 10:18:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:26.751 10:18:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.751 10:18:21 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:26.751 10:18:21 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:26.751 10:18:21 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:26.751 10:18:21 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.751 10:18:21 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.751 10:18:21 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.751 10:18:21 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.751 10:18:21 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.751 10:18:21 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:26.751 10:18:21 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:26.751 [2024-07-15 10:18:21.222772] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:26.751 [2024-07-15 10:18:21.222835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2196568 ] 00:05:26.751 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.751 [2024-07-15 10:18:21.286217] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.008 [2024-07-15 10:18:21.402835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.008 [2024-07-15 10:18:21.464406] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:27.008 [2024-07-15 10:18:21.548600] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:27.266 00:05:27.266 Compression does not support the verify option, aborting. 00:05:27.266 10:18:21 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:27.266 10:18:21 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.266 10:18:21 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:27.266 10:18:21 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:27.266 10:18:21 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:27.266 10:18:21 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.266 00:05:27.266 real 0m0.470s 00:05:27.266 user 0m0.358s 00:05:27.266 sys 0m0.145s 00:05:27.266 10:18:21 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.266 10:18:21 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:27.266 ************************************ 00:05:27.266 END TEST accel_compress_verify 00:05:27.266 ************************************ 00:05:27.266 10:18:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.266 10:18:21 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:27.266 10:18:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:27.266 10:18:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.266 10:18:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.266 ************************************ 00:05:27.266 START TEST accel_wrong_workload 00:05:27.266 ************************************ 00:05:27.266 10:18:21 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:27.267 10:18:21 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:27.267 10:18:21 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:27.267 10:18:21 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:27.267 10:18:21 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.267 10:18:21 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:27.267 10:18:21 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.267 10:18:21 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:27.267 10:18:21 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:27.267 10:18:21 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:27.267 10:18:21 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.267 10:18:21 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.267 10:18:21 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.267 10:18:21 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.267 10:18:21 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.267 10:18:21 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:27.267 10:18:21 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:27.267 Unsupported workload type: foobar 00:05:27.267 [2024-07-15 10:18:21.737968] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:27.267 accel_perf options: 00:05:27.267 [-h help message] 00:05:27.267 [-q queue depth per core] 00:05:27.267 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:27.267 [-T number of threads per core 00:05:27.267 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:27.267 [-t time in seconds] 00:05:27.267 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:27.267 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:27.267 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:27.267 [-l for compress/decompress workloads, name of uncompressed input file 00:05:27.267 [-S for crc32c workload, use this seed value (default 0) 00:05:27.267 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:27.267 [-f for fill workload, use this BYTE value (default 255) 00:05:27.267 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:27.267 [-y verify result if this switch is on] 00:05:27.267 [-a tasks to allocate per core (default: same value as -q)] 00:05:27.267 Can be used to spread operations across a wider range of memory. 
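All of these negative tests funnel through the NOT helper whose xtrace is interleaved above: run the command, then invert and normalize its exit status so the test passes exactly when the command fails. The real helper also folds statuses above 128 (signal deaths, the es=234 -> 106 step visible earlier) before deciding; a stripped-down hedged sketch:

    # Hypothetical minimal NOT, omitting the es>128 signal handling and the
    # exit-status case-mapping the real autotest_common.sh helper performs.
    NOT() {
        if "$@"; then
            return 1   # command succeeded: the negative test fails
        fi
        return 0       # command failed: the negative test passes
    }

    NOT ./build/examples/accel_perf -t 1 -w foobar && echo "foobar rejected, as expected"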
00:05:27.267 10:18:21 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:27.267 10:18:21 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.267 10:18:21 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:27.267 10:18:21 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.267 00:05:27.267 real 0m0.023s 00:05:27.267 user 0m0.013s 00:05:27.267 sys 0m0.010s 00:05:27.267 10:18:21 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.267 10:18:21 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:27.267 ************************************ 00:05:27.267 END TEST accel_wrong_workload 00:05:27.267 ************************************ 00:05:27.267 Error: writing output failed: Broken pipe 00:05:27.267 10:18:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.267 10:18:21 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:27.267 10:18:21 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:27.267 10:18:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.267 10:18:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.267 ************************************ 00:05:27.267 START TEST accel_negative_buffers 00:05:27.267 ************************************ 00:05:27.267 10:18:21 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:27.267 10:18:21 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:27.267 10:18:21 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:27.267 10:18:21 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:27.267 10:18:21 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.267 10:18:21 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:27.267 10:18:21 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.267 10:18:21 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:27.267 10:18:21 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:27.267 10:18:21 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:27.267 10:18:21 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.267 10:18:21 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.267 10:18:21 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.267 10:18:21 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.267 10:18:21 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.267 10:18:21 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:27.267 10:18:21 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:27.267 -x option must be non-negative. 
00:05:27.267 [2024-07-15 10:18:21.809339] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:27.267 accel_perf options: 00:05:27.267 [-h help message] 00:05:27.267 [-q queue depth per core] 00:05:27.267 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:27.267 [-T number of threads per core 00:05:27.267 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:27.267 [-t time in seconds] 00:05:27.267 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:27.267 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:27.267 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:27.267 [-l for compress/decompress workloads, name of uncompressed input file 00:05:27.267 [-S for crc32c workload, use this seed value (default 0) 00:05:27.267 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:27.267 [-f for fill workload, use this BYTE value (default 255) 00:05:27.267 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:27.267 [-y verify result if this switch is on] 00:05:27.267 [-a tasks to allocate per core (default: same value as -q)] 00:05:27.267 Can be used to spread operations across a wider range of memory. 00:05:27.267 10:18:21 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:27.267 10:18:21 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.267 10:18:21 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:27.267 10:18:21 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.267 00:05:27.267 real 0m0.024s 00:05:27.267 user 0m0.010s 00:05:27.267 sys 0m0.015s 00:05:27.267 10:18:21 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.267 10:18:21 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:27.267 ************************************ 00:05:27.267 END TEST accel_negative_buffers 00:05:27.267 ************************************ 00:05:27.267 Error: writing output failed: Broken pipe 00:05:27.267 10:18:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.267 10:18:21 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:27.267 10:18:21 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:27.267 10:18:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.267 10:18:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.267 ************************************ 00:05:27.267 START TEST accel_crc32c 00:05:27.267 ************************************ 00:05:27.267 10:18:21 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:27.267 10:18:21 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:27.267 10:18:21 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:27.267 10:18:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.267 10:18:21 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:27.267 10:18:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.267 10:18:21 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:27.267 10:18:21 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:27.267 10:18:21 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.267 10:18:21 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.267 10:18:21 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.267 10:18:21 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.267 10:18:21 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.267 10:18:21 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:27.267 10:18:21 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:27.267 [2024-07-15 10:18:21.873083] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:27.268 [2024-07-15 10:18:21.873148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2196635 ] 00:05:27.268 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.526 [2024-07-15 10:18:21.937782] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.526 [2024-07-15 10:18:22.055345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.526 10:18:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:28.898 10:18:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.898 00:05:28.898 real 0m1.474s 00:05:28.898 user 0m1.325s 00:05:28.898 sys 0m0.152s 00:05:28.898 10:18:23 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.898 10:18:23 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:28.898 ************************************ 00:05:28.898 END TEST accel_crc32c 00:05:28.898 ************************************ 00:05:28.898 10:18:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:28.898 10:18:23 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:28.898 10:18:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:28.898 10:18:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.898 10:18:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.898 ************************************ 00:05:28.898 START TEST accel_crc32c_C2 00:05:28.898 ************************************ 00:05:28.898 10:18:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:28.899 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:28.899 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:28.899 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.899 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:28.899 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.899 10:18:23 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:28.899 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:28.899 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.899 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.899 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.899 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.899 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.899 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:28.899 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:28.899 [2024-07-15 10:18:23.391712] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:28.899 [2024-07-15 10:18:23.391764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2196909 ] 00:05:28.899 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.899 [2024-07-15 10:18:23.454085] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.157 [2024-07-15 10:18:23.573955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.157 10:18:23 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:05:29.157 10:18:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.529 00:05:30.529 real 0m1.461s 00:05:30.529 user 0m1.325s 00:05:30.529 sys 0m0.137s 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.529 10:18:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:30.529 ************************************ 00:05:30.529 END TEST accel_crc32c_C2 00:05:30.529 ************************************ 00:05:30.529 10:18:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.529 10:18:24 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:30.529 10:18:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:30.529 10:18:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.529 10:18:24 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.529 ************************************ 00:05:30.529 START TEST accel_copy 00:05:30.529 ************************************ 00:05:30.529 10:18:24 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:30.529 10:18:24 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:30.529 10:18:24 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
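[Editor's note] Each TEST block reduces to one accel_perf invocation: the xtrace shows build_accel_config assembling accel_json_cfg, filtering it through jq -r ., and handing it to -c as /dev/fd/62, i.e. over a process substitution. A sketch of reproducing the copy run by hand, assuming an empty JSON object is an acceptable stand-in for the harness-built config:

    # Sketch: re-running the copy workload outside the harness. The empty
    # JSON config is a placeholder assumption; accel.sh normally builds it
    # from accel_json_cfg and pipes it through jq before passing it to -c.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -c <(echo '{}') -t 1 -w copy -y
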
00:05:30.529 10:18:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.529 10:18:24 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:30.529 10:18:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.529 10:18:24 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:30.529 10:18:24 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:30.529 10:18:24 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.529 10:18:24 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.529 10:18:24 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.529 10:18:24 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.529 10:18:24 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.529 10:18:24 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:30.529 10:18:24 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:30.529 [2024-07-15 10:18:24.903408] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:30.529 [2024-07-15 10:18:24.903471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2197070 ] 00:05:30.529 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.529 [2024-07-15 10:18:24.961106] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.529 [2024-07-15 10:18:25.077196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.529 10:18:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.529 10:18:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.529 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.529 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.529 10:18:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.529 10:18:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.529 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.529 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.529 10:18:25 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:30.529 10:18:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.529 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.529 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.529 10:18:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.529 10:18:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.530 10:18:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.903 
10:18:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:31.903 10:18:26 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.903 00:05:31.903 real 0m1.466s 00:05:31.903 user 0m1.326s 00:05:31.903 sys 0m0.142s 00:05:31.903 10:18:26 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.903 10:18:26 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:31.903 ************************************ 00:05:31.903 END TEST accel_copy 00:05:31.903 ************************************ 00:05:31.903 10:18:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.903 10:18:26 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:31.903 10:18:26 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:31.903 10:18:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.903 10:18:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.903 ************************************ 00:05:31.903 START TEST accel_fill 00:05:31.903 ************************************ 00:05:31.903 10:18:26 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:31.903 10:18:26 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:31.903 10:18:26 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:31.903 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:31.903 10:18:26 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:31.903 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.903 10:18:26 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:31.903 10:18:26 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:05:31.903 10:18:26 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.903 10:18:26 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.903 10:18:26 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.903 10:18:26 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.903 10:18:26 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.903 10:18:26 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:31.903 10:18:26 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:31.903 [2024-07-15 10:18:26.418453] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:31.903 [2024-07-15 10:18:26.418521] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2197229 ] 00:05:31.903 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.903 [2024-07-15 10:18:26.482461] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.161 [2024-07-15 10:18:26.599795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.161 10:18:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:32.161 10:18:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:32.161 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:32.161 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:32.161 10:18:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:32.161 10:18:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
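[Editor's note] Worth noting in the fill settings above: the run_test line passes -f 128 -q 64 -a 64, and the parsed value shows up as 0x80, consistent with 128 being the fill pattern byte. A sketch sweeping a few pattern values, under the same empty-config assumption as the copy example:

    # Sketch: fill workload with the harness's flags (-f pattern, -q 64,
    # -a 64). 128 (0x80) is the value used above; 0 and 255 are extra
    # illustrative patterns, not taken from the log.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for pattern in 128 0 255; do
        "$SPDK/build/examples/accel_perf" -c <(echo '{}') \
            -t 1 -w fill -f "$pattern" -q 64 -a 64 -y
    done
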
00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:32.162 10:18:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:33.534 10:18:27 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:33.534 10:18:27 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.534 00:05:33.534 real 0m1.473s 00:05:33.534 user 0m1.333s 00:05:33.534 sys 0m0.142s 00:05:33.534 10:18:27 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.534 10:18:27 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:33.534 ************************************ 00:05:33.534 END TEST accel_fill 00:05:33.534 ************************************ 00:05:33.534 10:18:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.534 10:18:27 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:33.534 10:18:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:33.534 10:18:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.534 10:18:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.534 ************************************ 00:05:33.534 START TEST accel_copy_crc32c 00:05:33.534 ************************************ 00:05:33.534 10:18:27 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:33.534 10:18:27 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:33.534 10:18:27 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:33.534 10:18:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:27 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:33.534 10:18:27 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:33.534 10:18:27 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:33.534 10:18:27 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.534 10:18:27 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.534 10:18:27 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.534 10:18:27 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.534 10:18:27 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.534 10:18:27 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:33.534 10:18:27 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:33.534 [2024-07-15 10:18:27.938896] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:33.534 [2024-07-15 10:18:27.938988] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2197497 ] 00:05:33.534 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.534 [2024-07-15 10:18:28.001817] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.534 [2024-07-15 10:18:28.120008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.534 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.792 
10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.792 10:18:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:35.227 10:18:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.227 00:05:35.227 real 0m1.465s 00:05:35.227 user 0m1.319s 00:05:35.227 sys 0m0.147s 00:05:35.228 10:18:29 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.228 10:18:29 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:35.228 ************************************ 00:05:35.228 END TEST accel_copy_crc32c 00:05:35.228 ************************************ 00:05:35.228 10:18:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:35.228 10:18:29 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:35.228 10:18:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:35.228 10:18:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.228 10:18:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.228 ************************************ 00:05:35.228 START TEST accel_copy_crc32c_C2 00:05:35.228 ************************************ 00:05:35.228 10:18:29 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:35.228 [2024-07-15 10:18:29.445683] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:35.228 [2024-07-15 10:18:29.445737] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2197664 ] 00:05:35.228 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.228 [2024-07-15 10:18:29.508277] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.228 [2024-07-15 10:18:29.626715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
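[Editor's note] Every accel_perf start logs its DPDK EAL parameters (note the per-process --file-prefix=spdk_pid<pid>) followed by the "No free 2048 kB hugepages reported on node 1" notice. That notice refers to the per-NUMA-node hugepage pools, which can be inspected directly in sysfs:

    # Sketch: per-node 2 MB hugepage counters the EAL notice refers to.
    for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/{nr,free}_hugepages; do
        echo "$f: $(cat "$f")"
    done
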
00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.228 10:18:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
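[Editor's note] With -C 2 the copy_crc32c settings above carry two buffer sizes, '4096 bytes' and '8192 bytes' (2 x 4096), consistent with the CRC being chained across two source vectors. A sketch sweeping the chain count, under the same empty-config assumption as before; only -C 2 is the value the harness actually uses:

    # Sketch: varying the copy_crc32c chain count. 8192 = 2 x 4096 in the
    # parsed settings matches -C 2; chain counts 1 and 4 are illustrative.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for chained in 1 2 4; do
        "$SPDK/build/examples/accel_perf" -c <(echo '{}') \
            -t 1 -w copy_crc32c -y -C "$chained"
    done
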
00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.599 00:05:36.599 real 0m1.482s 00:05:36.599 user 0m1.333s 00:05:36.599 sys 0m0.152s 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.599 10:18:30 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:36.599 ************************************ 00:05:36.599 END TEST accel_copy_crc32c_C2 00:05:36.599 ************************************ 00:05:36.599 10:18:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:36.599 10:18:30 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:36.599 10:18:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:36.599 10:18:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.599 10:18:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.599 ************************************ 00:05:36.599 START TEST accel_dualcast 00:05:36.599 ************************************ 00:05:36.599 10:18:30 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:36.599 10:18:30 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:36.599 10:18:30 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:36.599 10:18:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:36.599 10:18:30 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:36.599 10:18:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:36.599 10:18:30 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:36.599 10:18:30 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:36.599 10:18:30 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.599 10:18:30 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.599 10:18:30 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.599 10:18:30 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.599 10:18:30 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.599 10:18:30 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:36.599 10:18:30 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:36.599 [2024-07-15 10:18:30.979465] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
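[Editor's note] Each END TEST footer reports real/user/sys timing; the runs above all land near 1.5 s of wall time for a 1-second (-t 1) workload, the difference being app start-up and teardown. A sketch for pulling those wall times out of a saved copy of this console output (console.log is a hypothetical filename):

    # Sketch: extract the per-test wall times from a saved log.
    grep -E 'real[[:space:]]+[0-9]+m' console.log | awk '{print $NF}'
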
00:05:36.599 [2024-07-15 10:18:30.979533] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2197819 ]
00:05:36.599 EAL: No free 2048 kB hugepages reported on node 1
00:05:36.599 [2024-07-15 10:18:31.042672] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:36.599 [2024-07-15 10:18:31.163866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:36.599 10:18:31 accel.accel_dualcast -- accel/accel.sh@20 -- # option loop (xtrace condensed): val=0x1; val=dualcast (accel.sh@23: accel_opc=dualcast); val='4096 bytes'; val=software (accel.sh@22: accel_module=software); val=32; val=32; val=1; val='1 seconds'; val=Yes
00:05:37.972 10:18:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=  (end-of-run reads)
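The option loop condensed above is the accel.sh read loop whose xtrace dominates this log; a reduced sketch of its shape, inferred from the accel.sh@19-@23 line numbers (the keys matched by the case arms are illustrative assumptions, the log only shows the resulting assignments):

  # Split each accel_perf output line on ':' and record the fields under test.
  while IFS=: read -r var val; do
    case "$var" in
      *opcode*) accel_opc=$val ;;     # hypothetical match pattern
      *module*) accel_module=$val ;;  # hypothetical match pattern
    esac
  done < perf_output.txt              # placeholder file; the harness reads a live fd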
00:05:37.972 10:18:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:37.972 10:18:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:05:37.972 10:18:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:37.972 real 0m1.483s
00:05:37.972 user 0m1.331s
00:05:37.972 sys 0m0.153s
00:05:37.972 ************************************
00:05:37.972 END TEST accel_dualcast
00:05:37.972 ************************************
00:05:37.972 10:18:32 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:37.972 10:18:32 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:05:37.972 ************************************
00:05:37.972 START TEST accel_compare
00:05:37.972 ************************************
00:05:37.972 10:18:32 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y
00:05:37.973 10:18:32 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:05:37.973 10:18:32 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:05:37.973 10:18:32 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config  (accel_json_cfg=(); no JSON config flags set; IFS=','; jq -r .)
[2024-07-15 10:18:32.513107] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
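The real/user/sys triple closing each test is ordinary bash time output; a sketch of the wrapper shape implied by the run_test and START/END markers in this log (assumed, not copied from autotest_common.sh):

  # Bracket a test body with markers and time it, as the entries above show.
  run_test_sketch() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"               # bash keyword; prints the real/user/sys triple on exit
    echo "END TEST $name"
  }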
00:05:37.973 [2024-07-15 10:18:32.513171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2198091 ]
00:05:37.973 EAL: No free 2048 kB hugepages reported on node 1
00:05:37.973 [2024-07-15 10:18:32.577236] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:38.231 [2024-07-15 10:18:32.696749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:38.231 10:18:32 accel.accel_compare -- accel/accel.sh@20 -- # option loop (xtrace condensed): val=0x1; val=compare (accel.sh@23: accel_opc=compare); val='4096 bytes'; val=software (accel.sh@22: accel_module=software); val=32; val=32; val=1; val='1 seconds'; val=Yes
00:05:39.602 10:18:33 accel.accel_compare -- accel/accel.sh@20 -- # val=  (end-of-run reads)
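Each run above boots a fresh SPDK application with a pid-derived DPDK file prefix (spdk_pid2197819, spdk_pid2198091, and so on through this section), so per-test hugepage files never collide. A sketch for pulling the prefixes out of a saved copy of this console output (the filename is a placeholder):

  grep -o 'file-prefix=spdk_pid[0-9]*' console.log | sort -u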
00:05:39.602 10:18:33 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:39.602 10:18:33 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:05:39.602 10:18:33 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:39.602 real 0m1.479s
00:05:39.602 user 0m1.334s
00:05:39.602 sys 0m0.148s
00:05:39.602 ************************************
00:05:39.602 END TEST accel_compare
00:05:39.602 ************************************
00:05:39.602 10:18:33 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:39.602 10:18:33 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:05:39.602 ************************************
00:05:39.602 START TEST accel_xor
00:05:39.602 ************************************
00:05:39.602 10:18:34 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y
00:05:39.602 10:18:34 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:05:39.602 10:18:34 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:05:39.602 10:18:34 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config  (accel_json_cfg=(); no JSON config flags set; IFS=','; jq -r .)
[2024-07-15 10:18:34.041091] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
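By this point the harness has walked copy_crc32c, dualcast and compare, and the xor and dif workloads follow below; a sketch of driving the verified subset in one loop (flags copied from the run_test lines in this log):

  # One-second verified run per workload, mirroring accel.sh@107-@109.
  for wl in dualcast compare xor; do
    build/examples/accel_perf -t 1 -w "$wl" -y
  done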
00:05:39.602 [2024-07-15 10:18:34.041154] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2198259 ]
00:05:39.602 EAL: No free 2048 kB hugepages reported on node 1
00:05:39.602 [2024-07-15 10:18:34.105276] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:39.602 [2024-07-15 10:18:34.227671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:39.860 10:18:34 accel.accel_xor -- accel/accel.sh@20 -- # option loop (xtrace condensed): val=0x1; val=xor (accel.sh@23: accel_opc=xor); val=2; val='4096 bytes'; val=software (accel.sh@22: accel_module=software); val=32; val=32; val=1; val='1 seconds'; val=Yes
00:05:41.233 10:18:35 accel.accel_xor -- accel/accel.sh@20 -- # val=  (end-of-run reads)
00:05:41.233 10:18:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:41.233 10:18:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:05:41.233 10:18:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:41.233 real 0m1.493s
00:05:41.233 user 0m1.347s
00:05:41.233 sys 0m0.148s
00:05:41.233 ************************************
00:05:41.233 END TEST accel_xor
00:05:41.233 ************************************
00:05:41.233 10:18:35 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:41.233 10:18:35 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:05:41.233 ************************************
00:05:41.233 START TEST accel_xor
00:05:41.233 ************************************
00:05:41.233 10:18:35 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3
00:05:41.233 10:18:35 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:05:41.233 10:18:35 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:05:41.233 10:18:35 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config  (accel_json_cfg=(); no JSON config flags set; IFS=','; jq -r .)
[2024-07-15 10:18:35.581350] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
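The -x 3 flag repeats the xor workload with three source buffers instead of the two used in the previous run (compare val=2 above with val=3 below); the operation itself, reduced to one byte in plain bash for illustration:

  # 3-way xor over single-byte values; accel_perf applies this across 4096-byte buffers.
  a=0xA5 b=0x3C c=0x0F
  printf 'xor(a,b,c) = 0x%02X\n' "$(( a ^ b ^ c ))"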
00:05:41.233 [2024-07-15 10:18:35.581414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2198411 ]
00:05:41.233 EAL: No free 2048 kB hugepages reported on node 1
00:05:41.233 [2024-07-15 10:18:35.644860] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:41.233 [2024-07-15 10:18:35.766425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.233 10:18:35 accel.accel_xor -- accel/accel.sh@20 -- # option loop (xtrace condensed): val=0x1; val=xor (accel.sh@23: accel_opc=xor); val=3; val='4096 bytes'; val=software (accel.sh@22: accel_module=software); val=32; val=32; val=1; val='1 seconds'; val=Yes
00:05:42.607 10:18:37 accel.accel_xor -- accel/accel.sh@20 -- # val=  (end-of-run reads)
00:05:42.608 10:18:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:42.608 10:18:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:05:42.608 10:18:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:42.608 real 0m1.487s
00:05:42.608 user 0m1.347s
00:05:42.608 sys 0m0.142s
00:05:42.608 ************************************
00:05:42.608 END TEST accel_xor
00:05:42.608 ************************************
00:05:42.608 10:18:37 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:42.608 10:18:37 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:05:42.608 ************************************
00:05:42.608 START TEST accel_dif_verify
00:05:42.608 ************************************
00:05:42.608 10:18:37 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify
00:05:42.608 10:18:37 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:05:42.608 10:18:37 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:05:42.608 10:18:37 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config  (accel_json_cfg=(); no JSON config flags set; IFS=','; jq -r .)
[2024-07-15 10:18:37.118335] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:42.608 [2024-07-15 10:18:37.118398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2198685 ]
00:05:42.608 EAL: No free 2048 kB hugepages reported on node 1
00:05:42.608 [2024-07-15 10:18:37.181347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:42.865 [2024-07-15 10:18:37.300513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.865 10:18:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # option loop (xtrace condensed): val=0x1; val=dif_verify (accel.sh@23: accel_opc=dif_verify); val='4096 bytes'; val='4096 bytes'; val='512 bytes'; val='8 bytes'; val=software (accel.sh@22: accel_module=software); val=32; val=32; val=1; val='1 seconds'; val=No
00:05:44.235 10:18:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=  (end-of-run reads)
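The dif_verify loop carries four size values where the simpler workloads had one: two '4096 bytes' entries, a '512 bytes' entry and an '8 bytes' entry. Reading those as buffer size, data-block size and per-block DIF field size (an assumption; the log does not label the fields), the per-buffer layout works out as:

  # Blocks per buffer under the assumed reading of the dif_verify size fields.
  buf=4096 blk=512 dif=8
  echo "$(( buf / blk )) blocks of $blk bytes, each with an $dif-byte DIF field"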
00:05:44.235 10:18:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:44.235 10:18:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:05:44.235 10:18:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:44.235 real 0m1.475s
00:05:44.235 user 0m1.340s
00:05:44.235 sys 0m0.139s
00:05:44.235 ************************************
00:05:44.235 END TEST accel_dif_verify
00:05:44.235 ************************************
00:05:44.235 10:18:38 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:44.235 10:18:38 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:05:44.235 ************************************
00:05:44.235 START TEST accel_dif_generate
00:05:44.235 ************************************
00:05:44.235 10:18:38 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate
00:05:44.235 10:18:38 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:05:44.235 10:18:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:05:44.235 10:18:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config  (accel_json_cfg=(); no JSON config flags set; IFS=','; jq -r .)
[2024-07-15 10:18:38.640656] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:44.235 [2024-07-15 10:18:38.640719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2198845 ]
00:05:44.235 EAL: No free 2048 kB hugepages reported on node 1
00:05:44.235 [2024-07-15 10:18:38.704443] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:44.493 [2024-07-15 10:18:38.830540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
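The "No free 2048 kB hugepages reported on node 1" line repeats at every app start above; it appears informational in this run, since the tests proceed and the pages evidently come from another NUMA node. A sketch for checking what the host actually provides (standard kernel interfaces, not taken from this log):

  # Per-node 2 MB hugepage availability as the EAL would see it.
  grep -i huge /proc/meminfo
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages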
00:05:44.493 10:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # option loop (xtrace condensed): val=0x1; val=dif_generate (accel.sh@23: accel_opc=dif_generate); val='4096 bytes'; val='4096 bytes'; val='512 bytes'; val='8 bytes'; val=software (accel.sh@22: accel_module=software); val=32; val=32; val=1; val='1 seconds'; val=No
00:05:45.865 10:18:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=  (end-of-run reads)
00:05:45.865 10:18:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:45.865 10:18:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.865 00:05:45.865 real 0m1.492s 00:05:45.865 user 0m1.345s 00:05:45.865 sys 0m0.150s 00:05:45.865 10:18:40 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.865 10:18:40 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:45.865 ************************************ 00:05:45.865 END TEST accel_dif_generate 00:05:45.865 ************************************ 00:05:45.865 10:18:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.865 10:18:40 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:45.865 10:18:40 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:45.865 10:18:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.865 10:18:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.865 ************************************ 00:05:45.865 START TEST accel_dif_generate_copy 00:05:45.865 ************************************ 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:45.865 [2024-07-15 10:18:40.182227] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
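The accel_dif_generate case that just finished (real 0m1.492s on the software module) and the accel_dif_generate_copy case starting here drive the same standalone accel_perf example for one second; build_accel_config assembles a JSON accel config (empty in these runs) and hands it to accel_perf on fd 62. A minimal sketch of an equivalent manual invocation, assuming the workspace layout shown in the log, reading the traced 512-byte and 8-byte values as block and metadata sizes, and assuming -c can simply be dropped when no accel config is needed:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # DIF generate over 4096-byte buffers (512-byte blocks, 8 bytes of
  # metadata, per the val='...' trace above) for 1 second in software
  $SPDK/build/examples/accel_perf -t 1 -w dif_generate
  # Same workload, but generate-and-copy in a single operation
  $SPDK/build/examples/accel_perf -t 1 -w dif_generate_copy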
00:05:45.865 [2024-07-15 10:18:40.182288] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2199024 ] 00:05:45.865 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.865 [2024-07-15 10:18:40.247010] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.865 [2024-07-15 10:18:40.369756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.865 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.866 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.866 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.866 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.866 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.866 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.866 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.866 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.866 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.866 10:18:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.234 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:47.234 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.234 10:18:41 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:47.234 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.234 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:47.234 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.235 00:05:47.235 real 0m1.493s 00:05:47.235 user 0m1.342s 00:05:47.235 sys 0m0.152s 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.235 10:18:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:47.235 ************************************ 00:05:47.235 END TEST accel_dif_generate_copy 00:05:47.235 ************************************ 00:05:47.235 10:18:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.235 10:18:41 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:47.235 10:18:41 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:47.235 10:18:41 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:47.235 10:18:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.235 10:18:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.235 ************************************ 00:05:47.235 START TEST accel_comp 00:05:47.235 ************************************ 00:05:47.235 10:18:41 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:47.235 10:18:41 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:05:47.235 10:18:41 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:47.235 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.235 10:18:41 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:47.235 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:47.235 10:18:41 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:47.235 10:18:41 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:47.235 10:18:41 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.235 10:18:41 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.235 10:18:41 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.235 10:18:41 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.235 10:18:41 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.235 10:18:41 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:47.235 10:18:41 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:47.235 [2024-07-15 10:18:41.727547] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:47.235 [2024-07-15 10:18:41.727617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2199271 ] 00:05:47.235 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.235 [2024-07-15 10:18:41.789870] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.493 [2024-07-15 10:18:41.911448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:47.493 10:18:41 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:47.493 10:18:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:48.865 10:18:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.865 00:05:48.865 real 0m1.489s 00:05:48.865 user 0m1.348s 00:05:48.865 sys 0m0.144s 00:05:48.865 10:18:43 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.865 10:18:43 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:48.865 ************************************ 00:05:48.865 END TEST accel_comp 00:05:48.865 ************************************ 00:05:48.865 10:18:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.865 10:18:43 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:48.865 10:18:43 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:48.865 10:18:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.865 10:18:43 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:48.865 ************************************ 00:05:48.865 START TEST accel_decomp 00:05:48.865 ************************************ 00:05:48.865 10:18:43 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:48.865 [2024-07-15 10:18:43.264718] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
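accel_comp, which just passed, and accel_decomp, starting here, reuse the same harness but point accel_perf at a real payload: -l names test/accel/bib as the input file, and the decompress runs add -y, accel_perf's verify switch. A sketch of the pair as run above (again without the harness's -c /dev/fd/62), with $SPDK standing in for the workspace path:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Compress the bib sample for 1 second on the software module
  $SPDK/build/examples/accel_perf -t 1 -w compress -l $SPDK/test/accel/bib
  # Decompress it for 1 second and verify the output (-y)
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y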
00:05:48.865 [2024-07-15 10:18:43.264784] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2199435 ] 00:05:48.865 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.865 [2024-07-15 10:18:43.327613] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.865 [2024-07-15 10:18:43.446830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.865 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.866 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:49.123 10:18:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:44 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:50.491 10:18:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.491 00:05:50.491 real 0m1.487s 00:05:50.491 user 0m1.351s 00:05:50.491 sys 0m0.138s 00:05:50.491 10:18:44 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.491 10:18:44 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:50.491 ************************************ 00:05:50.491 END TEST accel_decomp 00:05:50.491 ************************************ 00:05:50.491 10:18:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:50.491 10:18:44 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:50.491 10:18:44 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:50.491 10:18:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.491 10:18:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.491 ************************************ 00:05:50.491 START TEST accel_decomp_full 00:05:50.491 ************************************ 00:05:50.491 10:18:44 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:50.491 10:18:44 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:50.491 10:18:44 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:50.491 10:18:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:44 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:50.491 10:18:44 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:50.491 10:18:44 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:50.491 10:18:44 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.491 10:18:44 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.491 10:18:44 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.491 10:18:44 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.491 10:18:44 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.491 10:18:44 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:50.491 10:18:44 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:50.491 [2024-07-15 10:18:44.796251] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:50.491 [2024-07-15 10:18:44.796317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2199662 ] 00:05:50.491 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.491 [2024-07-15 10:18:44.855246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.491 [2024-07-15 10:18:44.974053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.491 10:18:45 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.491 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.492 10:18:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:51.888 10:18:46 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.888 00:05:51.888 real 0m1.501s 00:05:51.888 user 0m1.356s 00:05:51.888 sys 0m0.147s 00:05:51.888 10:18:46 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.888 10:18:46 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:51.888 ************************************ 00:05:51.888 END TEST accel_decomp_full 00:05:51.888 ************************************ 00:05:51.888 10:18:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.888 10:18:46 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:51.888 10:18:46 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:05:51.888 10:18:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.888 10:18:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.888 ************************************ 00:05:51.888 START TEST accel_decomp_mcore 00:05:51.888 ************************************ 00:05:51.888 10:18:46 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:51.888 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:51.888 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:51.888 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.888 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:51.888 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.888 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:51.888 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:51.888 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.888 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.888 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.888 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.888 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.888 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:51.888 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:51.888 [2024-07-15 10:18:46.345286] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
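accel_decomp_mcore, launching here, is the same decompress-and-verify run fanned out over several cores: the -m 0xf mask on the accel_perf command line surfaces as -c 0xf in the EAL parameters below, and four reactors come up on cores 0 through 3. A sketch under the same assumptions as the earlier examples:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Decompress-and-verify on a 4-core mask; the log should show one reactor
  # per core and user time roughly 4x the wall time
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf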
00:05:51.888 [2024-07-15 10:18:46.345354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2199868 ] 00:05:51.888 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.888 [2024-07-15 10:18:46.408676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:51.888 [2024-07-15 10:18:46.532944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.888 [2024-07-15 10:18:46.533006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.888 [2024-07-15 10:18:46.533059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.888 [2024-07-15 10:18:46.533063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.146 10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.146 10:18:46 
10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
10:18:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
10:18:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
10:18:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
10:18:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]

real 0m1.498s
user 0m4.811s
sys 0m0.153s
10:18:47 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
10:18:47 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST accel_decomp_mcore
************************************
10:18:47 accel -- common/autotest_common.sh@1142 -- # return 0
10:18:47 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
************************************
START TEST accel_decomp_full_mcore
************************************
10:18:47 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc
10:18:47 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module
10:18:47 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
10:18:47 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
10:18:47 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
10:18:47 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
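The wrapper above reduces to a single accel_perf command. A minimal sketch of running it by hand (paths as in this workspace; flag readings inferred from the surrounding trace, where -m 0xf produces the "Total cores available: 4" notice below and -t 1 echoes as the '1 seconds' run time; accel.json is a hypothetical stand-in for the config that build_accel_config normally pipes in on fd 62):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # software decompress of the bib test corpus on a four-core mask (0xf = cores 0-3);
  # accel.json is an assumed config file replacing the harness's /dev/fd/62
  ./build/examples/accel_perf -c accel.json -t 1 -w decompress \
      -l test/accel/bib -y -o 0 -m 0xf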
[2024-07-15 10:18:47.890491] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-15 10:18:47.890557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2200027 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 10:18:47.959609] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-07-15 10:18:48.086668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
[2024-07-15 10:18:48.086719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
[2024-07-15 10:18:48.086770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
[2024-07-15 10:18:48.086773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
10:18:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
10:18:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress
10:18:48 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
10:18:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
10:18:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software
10:18:48 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
10:18:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
10:18:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
10:18:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
10:18:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1
10:18:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds'
10:18:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes
10:18:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
10:18:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
10:18:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]

real 0m1.504s
user 0m4.817s
sys 0m0.162s
10:18:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
10:18:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST accel_decomp_full_mcore
************************************
10:18:49 accel -- common/autotest_common.sh@1142 -- # return 0
10:18:49 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
************************************
START TEST accel_decomp_mthread
************************************
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r .
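The mthread variant swaps the core mask for threading: -T 2 is read here as two worker threads on a single reactor, an interpretation taken from the trace rather than from accel_perf's help text (the core count drops to 1 below while the config echoes val=2). A minimal sketch of the same run by hand, with the same hypothetical accel.json standing in for the fd-62 config:

  # single core, two threads; all other flags as in the harness invocation
  ./build/examples/accel_perf -c accel.json -t 1 -w decompress \
      -l test/accel/bib -y -T 2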
[2024-07-15 10:18:49.445501] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-15 10:18:49.445566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2200309 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 10:18:49.508040] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 10:18:49.631349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes'
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds'
10:18:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes
10:18:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
10:18:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
10:18:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]

real 0m1.499s
user 0m1.351s
sys 0m0.151s
10:18:50 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
10:18:50 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST accel_decomp_mthread
************************************
10:18:50 accel -- common/autotest_common.sh@1142 -- # return 0
10:18:50 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
************************************
START TEST accel_decomp_full_mthread
************************************
10:18:50 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc
10:18:50 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module
10:18:50 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
10:18:50 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
10:18:50 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
10:18:50 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r .
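The "full" variants add only -o 0. Judging from the traces, -o sets the transfer size and 0 selects the whole input: the mthread run above echoed val='4096 bytes' while the full runs echo val='111250 bytes'. If that reading is right, the figure should simply be the on-disk size of the test corpus; a one-line check of the assumption (not taken from this log):

  stat -c %s test/accel/bib    # expected to print 111250 if the reading holds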
[2024-07-15 10:18:50.993861] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-15 10:18:50.993950] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2200460 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 10:18:51.056274] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 10:18:51.180027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
10:18:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1
10:18:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress
10:18:51 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
10:18:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes'
10:18:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software
10:18:51 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software
10:18:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
10:18:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
10:18:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
10:18:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2
10:18:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds'
10:18:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes
10:18:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
10:18:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
10:18:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]

real 0m1.530s
user 0m1.385s
sys 0m0.147s
10:18:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
10:18:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST accel_decomp_full_mthread
************************************
10:18:52 accel -- common/autotest_common.sh@1142 -- # return 0
10:18:52 accel -- accel/accel.sh@124 -- # [[ n == y ]]
10:18:52 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
10:18:52 accel -- accel/accel.sh@137 -- # build_accel_config
************************************
START TEST accel_dif_functional_tests
************************************
10:18:52 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
[2024-07-15 10:18:52.597545] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-15 10:18:52.597609] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2200625 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 10:18:52.662477] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
[2024-07-15 10:18:52.789692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
[2024-07-15 10:18:52.789753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
[2024-07-15 10:18:52.789756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0

CUnit - A unit testing framework for C - Version 2.1-3
http://cunit.sourceforge.net/

Suite: accel_dif
  Test: verify: DIF generated, GUARD check ...passed
  Test: verify: DIF generated, APPTAG check ...passed
  Test: verify: DIF generated, REFTAG check ...passed
  Test: verify: DIF not generated, GUARD check ...[2024-07-15 10:18:52.890930] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
passed
  Test: verify: DIF not generated, APPTAG check ...[2024-07-15 10:18:52.890998] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
passed
  Test: verify: DIF not generated, REFTAG check ...[2024-07-15 10:18:52.891032] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
passed
  Test: verify: APPTAG correct, APPTAG check ...passed
  Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 10:18:52.891104] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
passed
  Test: verify: APPTAG incorrect, no APPTAG check ...passed
  Test: verify: REFTAG incorrect, REFTAG ignore ...passed
  Test: verify: REFTAG_INIT correct, REFTAG check ...passed
  Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 10:18:52.891253] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
passed
  Test: verify copy: DIF generated, GUARD check ...passed
  Test: verify copy: DIF generated, APPTAG check ...passed
  Test: verify copy: DIF generated, REFTAG check ...passed
  Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 10:18:52.891405] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
passed
  Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 10:18:52.891441] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
passed
  Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 10:18:52.891474] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
passed
  Test: generate copy: DIF generated, GUARD check ...passed
  Test: generate copy: DIF generated, APPTAG check ...passed
  Test: generate copy: DIF generated, REFTAG check ...passed
  Test: generate copy: DIF generated, no GUARD check flag set ...passed
  Test: generate copy: DIF generated, no APPTAG check flag set ...passed
  Test: generate copy: DIF generated, no REFTAG check flag set ...passed
  Test: generate copy: iovecs-len validate ...[2024-07-15 10:18:52.891695] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
passed
  Test: generate copy: buffer alignment validate ...passed

Run Summary:    Type  Total    Ran Passed Failed Inactive
              suites      1      1    n/a      0        0
               tests     26     26     26      0        0
             asserts    115    115    115      0      n/a

Elapsed time =    0.003 seconds

real 0m0.598s
user 0m0.894s
sys 0m0.189s
10:18:53 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable
10:18:53 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST accel_dif_functional_tests
************************************
10:18:53 accel -- common/autotest_common.sh@1142 -- # return 0
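The *ERROR* lines inside the suite are the negative paths doing their job: each "not generated" or "incorrect" case plants a mismatching Guard, App Tag, or Ref Tag and the test passes only when dif.c rejects it, which is why all 26 tests pass despite the error output. The suite is a standalone CUnit binary, so it can be rerun outside the harness; a minimal sketch (the empty JSON config is an assumption, the harness normally pipes build_accel_config output on fd 62):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # rerun just the DIF functional tests; '{}' is assumed to fall back
  # to software-module defaults
  ./test/accel/dif/dif -c <(echo '{}')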
real 0m33.661s
user 0m36.983s
sys 0m4.697s
10:18:53 accel -- common/autotest_common.sh@1124 -- # xtrace_disable
10:18:53 accel -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST accel
************************************
10:18:53 -- common/autotest_common.sh@1142 -- # return 0
10:18:53 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
************************************
START TEST accel_rpc
************************************
* Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
10:18:53 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
10:18:53 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2200812
10:18:53 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
10:18:53 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2200812
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
[2024-07-15 10:18:53.332419] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-15 10:18:53.332518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2200812 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 10:18:53.389408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 10:18:53.495357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
10:18:53 accel_rpc -- common/autotest_common.sh@862 -- # return 0
10:18:53 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]]
10:18:53 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
10:18:53 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]]
10:18:53 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
10:18:53 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
************************************
START TEST accel_assign_opcode
************************************
10:18:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
[2024-07-15 10:18:53.559978] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
10:18:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
[2024-07-15 10:18:53.567988] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
10:18:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
10:18:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
10:18:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
10:18:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
software
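Stripped of the test wrappers, the assignment flow is three plain RPCs against the --wait-for-rpc target, presumably because opcode routing has to be fixed before framework init completes. A minimal sketch using the same rpc.py methods seen in the trace:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # route the copy opcode to the software module while the target is
  # still waiting for RPCs, then finish init and read the table back
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # prints: software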
real 0m0.307s
user 0m0.041s
sys 0m0.008s
10:18:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable
10:18:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST accel_assign_opcode
************************************
10:18:53 accel_rpc -- common/autotest_common.sh@1142 -- # return 0
10:18:53 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2200812
10:18:53 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2200812 ']'
10:18:53 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2200812
10:18:53 accel_rpc -- common/autotest_common.sh@953 -- # uname
10:18:53 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
10:18:53 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2200812
10:18:53 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
10:18:53 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
10:18:53 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2200812'
killing process with pid 2200812
10:18:53 accel_rpc -- common/autotest_common.sh@967 -- # kill 2200812
10:18:53 accel_rpc -- common/autotest_common.sh@972 -- # wait 2200812

real 0m1.153s
user 0m1.083s
sys 0m0.426s
10:18:54 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
10:18:54 accel_rpc -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST accel_rpc
************************************
10:18:54 -- common/autotest_common.sh@1142 -- # return 0
10:18:54 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
************************************
START TEST app_cmdline
************************************
10:18:54 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
* Looking for test storage...
00:06:00.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:06:00.017 10:18:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:06:00.017 10:18:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2201018
00:06:00.017 10:18:54 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:06:00.017 10:18:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2201018
00:06:00.017 10:18:54 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2201018 ']'
00:06:00.017 10:18:54 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:00.017 10:18:54 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:00.017 10:18:54 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:00.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:00.017 10:18:54 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:00.017 10:18:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:00.017 [2024-07-15 10:18:54.524062] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:06:00.017 [2024-07-15 10:18:54.524157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2201018 ]
00:06:00.017 EAL: No free 2048 kB hugepages reported on node 1
00:06:00.017 [2024-07-15 10:18:54.590039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:00.274 [2024-07-15 10:18:54.715277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:00.532 10:18:54 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:00.532 10:18:54 app_cmdline -- common/autotest_common.sh@862 -- # return 0
00:06:00.532 10:18:54 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:06:00.789 {
00:06:00.789 "version": "SPDK v24.09-pre git sha1 719d03c6a",
00:06:00.789 "fields": {
00:06:00.789 "major": 24,
00:06:00.789 "minor": 9,
00:06:00.789 "patch": 0,
00:06:00.789 "suffix": "-pre",
00:06:00.789 "commit": "719d03c6a"
00:06:00.789 }
00:06:00.789 }
00:06:00.789 10:18:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:06:00.789 10:18:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:06:00.789 10:18:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:06:00.789 10:18:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:06:00.789 10:18:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:06:00.789 10:18:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:06:00.789 10:18:55 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:00.789 10:18:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:00.789 10:18:55 app_cmdline -- app/cmdline.sh@26 -- # sort
00:06:00.789 10:18:55 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:00.789 10:18:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:06:00.789 10:18:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:06:00.789 10:18:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:00.789 10:18:55 app_cmdline -- common/autotest_common.sh@648 -- # local es=0
00:06:00.789 10:18:55 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:00.789 10:18:55 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:00.789 10:18:55 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:00.789 10:18:55 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:00.789 10:18:55 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:00.789 10:18:55 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:00.789 10:18:55 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:00.789 10:18:55 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:00.789 10:18:55 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:06:00.789 10:18:55 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:01.048 request:
00:06:01.048 {
00:06:01.048 "method": "env_dpdk_get_mem_stats",
00:06:01.048 "req_id": 1
00:06:01.048 }
00:06:01.048 Got JSON-RPC error response
00:06:01.048 response:
00:06:01.048 {
00:06:01.048 "code": -32601,
00:06:01.048 "message": "Method not found"
00:06:01.048 }
00:06:01.048 10:18:55 app_cmdline -- common/autotest_common.sh@651 -- # es=1
00:06:01.048 10:18:55 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:01.048 10:18:55 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:01.048 10:18:55 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:01.048 10:18:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2201018
00:06:01.048 10:18:55 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2201018 ']'
00:06:01.048 10:18:55 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2201018
00:06:01.048 10:18:55 app_cmdline -- common/autotest_common.sh@953 -- # uname
00:06:01.048 10:18:55 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:01.048 10:18:55 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2201018
00:06:01.048 10:18:55 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:01.048 10:18:55 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:01.048 10:18:55 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2201018'
00:06:01.048 killing process with pid 2201018
00:06:01.048 10:18:55 app_cmdline -- common/autotest_common.sh@967 -- # kill 2201018
00:06:01.048 10:18:55 app_cmdline -- common/autotest_common.sh@972 -- # wait 2201018
00:06:01.616
00:06:01.616 real 0m1.640s
00:06:01.616 user 0m2.039s
00:06:01.616 sys 0m0.472s
00:06:01.616 10:18:56 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable
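The sequence above is the whole point of the app_cmdline test: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer normally while any other call, here env_dpdk_get_mem_stats, is refused with JSON-RPC error -32601 (Method not found). A condensed sketch of the same probe, with paths relative to the SPDK checkout; the closing banner for the test follows below:

  # Sketch: start a target that whitelists exactly two RPCs, then probe it.
  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version                        # allowed: returns the version JSON
  ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort    # lists exactly the allowed methods
  ./scripts/rpc.py env_dpdk_get_mem_stats                  # not whitelisted: error -32601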
00:06:01.616 10:18:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:01.616 ************************************ 00:06:01.616 END TEST app_cmdline 00:06:01.616 ************************************ 00:06:01.616 10:18:56 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.616 10:18:56 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:01.616 10:18:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.616 10:18:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.616 10:18:56 -- common/autotest_common.sh@10 -- # set +x 00:06:01.616 ************************************ 00:06:01.616 START TEST version 00:06:01.616 ************************************ 00:06:01.616 10:18:56 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:01.616 * Looking for test storage... 00:06:01.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:01.616 10:18:56 version -- app/version.sh@17 -- # get_header_version major 00:06:01.616 10:18:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:01.616 10:18:56 version -- app/version.sh@14 -- # cut -f2 00:06:01.616 10:18:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.616 10:18:56 version -- app/version.sh@17 -- # major=24 00:06:01.616 10:18:56 version -- app/version.sh@18 -- # get_header_version minor 00:06:01.616 10:18:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:01.616 10:18:56 version -- app/version.sh@14 -- # cut -f2 00:06:01.616 10:18:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.616 10:18:56 version -- app/version.sh@18 -- # minor=9 00:06:01.616 10:18:56 version -- app/version.sh@19 -- # get_header_version patch 00:06:01.616 10:18:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:01.616 10:18:56 version -- app/version.sh@14 -- # cut -f2 00:06:01.616 10:18:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.616 10:18:56 version -- app/version.sh@19 -- # patch=0 00:06:01.616 10:18:56 version -- app/version.sh@20 -- # get_header_version suffix 00:06:01.616 10:18:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:01.616 10:18:56 version -- app/version.sh@14 -- # cut -f2 00:06:01.616 10:18:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.616 10:18:56 version -- app/version.sh@20 -- # suffix=-pre 00:06:01.616 10:18:56 version -- app/version.sh@22 -- # version=24.9 00:06:01.616 10:18:56 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:01.616 10:18:56 version -- app/version.sh@28 -- # version=24.9rc0 00:06:01.616 10:18:56 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:01.616 10:18:56 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:01.616 10:18:56 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:01.616 10:18:56 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:01.616 00:06:01.616 real 0m0.111s 00:06:01.616 user 0m0.061s 00:06:01.616 sys 0m0.072s 00:06:01.616 10:18:56 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.616 10:18:56 version -- common/autotest_common.sh@10 -- # set +x 00:06:01.616 ************************************ 00:06:01.616 END TEST version 00:06:01.616 ************************************ 00:06:01.616 10:18:56 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.616 10:18:56 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:01.616 10:18:56 -- spdk/autotest.sh@198 -- # uname -s 00:06:01.616 10:18:56 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:01.616 10:18:56 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:01.616 10:18:56 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:01.616 10:18:56 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:01.616 10:18:56 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:01.616 10:18:56 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:01.616 10:18:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.616 10:18:56 -- common/autotest_common.sh@10 -- # set +x 00:06:01.876 10:18:56 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:01.877 10:18:56 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:01.877 10:18:56 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:01.877 10:18:56 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:01.877 10:18:56 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:01.877 10:18:56 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:01.877 10:18:56 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:01.877 10:18:56 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:01.877 10:18:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.877 10:18:56 -- common/autotest_common.sh@10 -- # set +x 00:06:01.877 ************************************ 00:06:01.877 START TEST nvmf_tcp 00:06:01.877 ************************************ 00:06:01.877 10:18:56 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:01.877 * Looking for test storage... 00:06:01.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.877 10:18:56 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.877 10:18:56 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.877 10:18:56 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.877 10:18:56 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.877 10:18:56 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.877 10:18:56 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.877 10:18:56 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:01.877 10:18:56 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:01.877 10:18:56 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.877 10:18:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:01.877 10:18:56 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:01.877 10:18:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:01.877 10:18:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.877 10:18:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.877 ************************************ 00:06:01.877 START TEST nvmf_example 00:06:01.877 ************************************ 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:01.877 * Looking for test storage... 
00:06:01.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:01.877 10:18:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:01.878 10:18:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:01.878 10:18:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:01.878 10:18:56 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.878 10:18:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:01.878 10:18:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:01.878 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:01.878 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:01.878 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:01.878 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:01.878 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:01.878 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:01.878 10:18:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:01.878 10:18:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:01.878 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:01.878 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:01.878 10:18:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:01.878 10:18:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:03.806 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:03.806 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:03.806 Found net devices under 
0000:0a:00.0: cvl_0_0 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:03.806 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:03.806 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:04.065 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:04.065 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:04.065 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:04.065 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:04.065 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:04.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:04.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:06:04.065 00:06:04.065 --- 10.0.0.2 ping statistics --- 00:06:04.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.065 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:06:04.065 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:04.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:04.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:06:04.065 00:06:04.065 --- 10.0.0.1 ping statistics --- 00:06:04.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.065 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:06:04.065 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:04.065 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:04.065 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:04.065 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:04.065 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:04.065 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:04.065 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:04.065 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:04.065 10:18:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:04.065 10:18:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:04.065 10:18:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:04.066 10:18:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:04.066 10:18:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:04.066 10:18:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:04.066 10:18:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:04.066 10:18:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2203037 00:06:04.066 10:18:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:04.066 10:18:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:04.066 10:18:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2203037 00:06:04.066 10:18:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2203037 ']' 00:06:04.066 10:18:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.066 10:18:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.066 10:18:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
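Before the example target comes up, nvmftestinit has already built the test network traced above: the two e810 ports were detected as cvl_0_0 and cvl_0_1, the target-side port was moved into a private network namespace, both ends were addressed, TCP port 4420 was opened, and a ping in each direction proved connectivity. A sketch of that topology, using the interface names detected in this run (run as root):

  # Sketch of the namespace plumbing performed by nvmftestinit above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator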
00:06:04.066 10:18:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:04.066 10:18:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:04.066 EAL: No free 2048 kB hugepages reported on node 1
00:06:05.001 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:05.001 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0
00:06:05.001 10:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:06:05.001 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:05.001 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:05.001 10:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:06:05.001 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:05.001 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:05.001 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:05.001 10:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:06:05.001 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:05.001 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:05.259 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:05.259 10:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:06:05.259 10:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:06:05.259 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:05.259 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:05.259 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:05.259 10:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:06:05.259 10:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:06:05.259 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:05.259 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:05.259 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:05.259 10:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:05.259 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:05.259 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:05.259 10:18:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:05.259 10:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:06:05.259 10:18:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:06:05.259 EAL: No free 2048 kB hugepages reported on node 1
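The five rpc_cmd calls above are the complete target configuration for this test: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem with that bdev as namespace 1, and a listener on the target-side address, after which spdk_nvme_perf drives I/O from the initiator. Roughly the same configuration expressed directly with scripts/rpc.py (the harness's rpc_cmd helper wraps these calls; the default RPC socket is assumed); the perf results follow below:

  # Sketch: the JSON-RPC configuration mirrored from the log above.
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, options as logged
  $rpc bdev_malloc_create 64 512                   # 64 MiB, 512 B blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Drive it from the initiator: queue depth 64, 4 KiB I/O, randrw with 30% reads, 10 s.
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'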
00:06:17.490 Initializing NVMe Controllers
00:06:17.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:17.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:17.490 Initialization complete. Launching workers.
00:06:17.490 ========================================================
00:06:17.490 Latency(us)
00:06:17.490 Device Information : IOPS MiB/s Average min max
00:06:17.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15108.32 59.02 4236.81 812.43 18992.50
00:06:17.490 ========================================================
00:06:17.490 Total : 15108.32 59.02 4236.81 812.43 18992.50
00:06:17.490
00:06:17.490 10:19:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:06:17.490 10:19:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:06:17.490 10:19:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:06:17.490 10:19:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync
00:06:17.490 10:19:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:17.490 10:19:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:06:17.490 10:19:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:17.490 10:19:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:17.490 rmmod nvme_tcp
00:06:17.490 rmmod nvme_fabrics
00:06:17.490 rmmod nvme_keyring
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2203037 ']'
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2203037
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2203037 ']'
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2203037
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2203037
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']'
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2203037'
00:06:17.490 killing process with pid 2203037
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 2203037
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 2203037
00:06:17.490 nvmf threads initialize successfully
00:06:17.490 bdev subsystem init successfully
00:06:17.490 created a nvmf target service
00:06:17.490 create targets's poll groups done
00:06:17.490 all subsystems of target started
00:06:17.490 nvmf target is running
00:06:17.490 all subsystems of target stopped
00:06:17.490 destroy targets's poll groups done
00:06:17.490 destroyed the nvmf target service
00:06:17.490 bdev subsystem finish successfully
00:06:17.490 nvmf threads destroy successfully
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:06:17.490 10:19:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:17.749 10:19:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:06:17.749 10:19:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:06:17.749 10:19:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:17.749 10:19:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:17.749
00:06:17.749 real 0m15.986s
00:06:17.749 user 0m45.555s
00:06:17.749 sys 0m3.263s
00:06:17.749 10:19:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:17.749 10:19:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:06:17.749 ************************************
00:06:17.749 END TEST nvmf_example
00:06:17.749 ************************************
00:06:17.749 10:19:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:06:17.749 10:19:12 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:06:17.749 10:19:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:06:17.749 10:19:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:17.749 10:19:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:18.010 ************************************
00:06:18.010 START TEST nvmf_filesystem
00:06:18.010 ************************************
00:06:18.010 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:06:18.010 * Looking for test storage...
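One aside before the filesystem test gets going: the teardown just performed by nvmftestfini above has a deliberate order, and is worth capturing as a sketch. Kernel NVMe modules are unloaded first, the target process is killed and reaped, and only then is the namespace plumbing removed; the ip netns delete line is an assumption about what the elided _remove_spdk_ns helper does:

  # Sketch of the teardown mirrored from the log above (run as root).
  modprobe -v -r nvme-tcp               # rmmod'ed nvme_tcp, nvme_fabrics, nvme_keyring here
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"    # $nvmfpid: the example app's PID (2203037 in this run)
  ip netns delete cvl_0_0_ns_spdk       # assumed body of the _remove_spdk_ns helper
  ip -4 addr flush cvl_0_1              # clear the initiator-side address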
00:06:18.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.010 10:19:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:18.010 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:18.010 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:18.011 10:19:12 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:18.011 10:19:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]]
00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:06:18.012 #define SPDK_CONFIG_H
00:06:18.012 #define SPDK_CONFIG_APPS 1
00:06:18.012 #define SPDK_CONFIG_ARCH native
00:06:18.012 #undef SPDK_CONFIG_ASAN
00:06:18.012 #undef SPDK_CONFIG_AVAHI
00:06:18.012 #undef SPDK_CONFIG_CET
00:06:18.012 #define SPDK_CONFIG_COVERAGE 1
00:06:18.012 #define SPDK_CONFIG_CROSS_PREFIX
00:06:18.012 #undef SPDK_CONFIG_CRYPTO
00:06:18.012 #undef SPDK_CONFIG_CRYPTO_MLX5
00:06:18.012 #undef SPDK_CONFIG_CUSTOMOCF
00:06:18.012 #undef SPDK_CONFIG_DAOS
00:06:18.012 #define SPDK_CONFIG_DAOS_DIR
00:06:18.012 #define SPDK_CONFIG_DEBUG 1
00:06:18.012 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:06:18.012 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:06:18.012 #define SPDK_CONFIG_DPDK_INC_DIR
00:06:18.012 #define SPDK_CONFIG_DPDK_LIB_DIR
00:06:18.012 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:06:18.012 #undef SPDK_CONFIG_DPDK_UADK
00:06:18.012 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:06:18.012 #define SPDK_CONFIG_EXAMPLES 1
00:06:18.012 #undef SPDK_CONFIG_FC
00:06:18.012 #define SPDK_CONFIG_FC_PATH
00:06:18.012 #define SPDK_CONFIG_FIO_PLUGIN 1
00:06:18.012 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:06:18.012 #undef SPDK_CONFIG_FUSE
00:06:18.012 #undef SPDK_CONFIG_FUZZER
00:06:18.012 #define SPDK_CONFIG_FUZZER_LIB
00:06:18.012 #undef SPDK_CONFIG_GOLANG
00:06:18.012 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:06:18.012 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:06:18.012 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:06:18.012 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:06:18.012 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:06:18.012 #undef SPDK_CONFIG_HAVE_LIBBSD
00:06:18.012 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:06:18.012 #define SPDK_CONFIG_IDXD 1
00:06:18.012 #define SPDK_CONFIG_IDXD_KERNEL 1
00:06:18.012 #undef SPDK_CONFIG_IPSEC_MB
00:06:18.012 #define SPDK_CONFIG_IPSEC_MB_DIR
00:06:18.012 #define SPDK_CONFIG_ISAL 1
00:06:18.012 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:06:18.012 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:06:18.012 #define SPDK_CONFIG_LIBDIR
00:06:18.012 #undef SPDK_CONFIG_LTO
00:06:18.012 #define SPDK_CONFIG_MAX_LCORES 128
00:06:18.012 #define SPDK_CONFIG_NVME_CUSE 1
00:06:18.012 #undef SPDK_CONFIG_OCF
00:06:18.012 #define SPDK_CONFIG_OCF_PATH
00:06:18.012 #define SPDK_CONFIG_OPENSSL_PATH
00:06:18.012 #undef SPDK_CONFIG_PGO_CAPTURE
00:06:18.012 #define SPDK_CONFIG_PGO_DIR
00:06:18.012 #undef SPDK_CONFIG_PGO_USE
00:06:18.012 #define SPDK_CONFIG_PREFIX /usr/local
00:06:18.012 #undef SPDK_CONFIG_RAID5F
00:06:18.012 #undef SPDK_CONFIG_RBD
00:06:18.012 #define SPDK_CONFIG_RDMA 1
00:06:18.012 #define SPDK_CONFIG_RDMA_PROV verbs
00:06:18.012 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:06:18.012 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:06:18.012 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:06:18.012 #define SPDK_CONFIG_SHARED 1
00:06:18.012 #undef SPDK_CONFIG_SMA
00:06:18.012 #define SPDK_CONFIG_TESTS 1
00:06:18.012 #undef SPDK_CONFIG_TSAN
00:06:18.012 #define SPDK_CONFIG_UBLK 1
00:06:18.012 #define SPDK_CONFIG_UBSAN 1
00:06:18.012 #undef SPDK_CONFIG_UNIT_TESTS
00:06:18.012 #undef SPDK_CONFIG_URING
00:06:18.012 #define SPDK_CONFIG_URING_PATH
00:06:18.012 #undef SPDK_CONFIG_URING_ZNS
00:06:18.012 #undef SPDK_CONFIG_USDT
00:06:18.012 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:06:18.012 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:06:18.012 #define SPDK_CONFIG_VFIO_USER 1
00:06:18.012 #define SPDK_CONFIG_VFIO_USER_DIR
00:06:18.012 #define SPDK_CONFIG_VHOST 1
00:06:18.012 #define SPDK_CONFIG_VIRTIO 1
00:06:18.012 #undef SPDK_CONFIG_VTUNE
00:06:18.012 #define SPDK_CONFIG_VTUNE_DIR
00:06:18.012 #define SPDK_CONFIG_WERROR 1
00:06:18.012 #define SPDK_CONFIG_WPDK_DIR
00:06:18.012 #undef SPDK_CONFIG_XNVME
00:06:18.012 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- #
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:18.012 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:18.013 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:18.014 10:19:12 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:18.014 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
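The ': 0' / 'export' pairs traced above are autotest_common.sh giving every SPDK_TEST_*/SPDK_RUN_* knob a default before exporting it, so values injected through autorun-spdk.conf (SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp, SPDK_RUN_UBSAN=1, SPDK_TEST_NVMF_NICS=e810, ...) survive while everything left unset falls back to 0. A minimal sketch of that idiom, assuming the usual parameter-expansion form behind the ': 0' traces (the exact quoting in the script may differ):

    # Keep a caller-provided value, otherwise default to 0, then export.
    : "${SPDK_TEST_NVMF:=0}"
    export SPDK_TEST_NVMF
    : "${SPDK_RUN_UBSAN:=0}"
    export SPDK_RUN_UBSAN

The same pass also pins the runtime environment (SPDK_LIB_DIR, LD_LIBRARY_PATH, PYTHONPATH, and the ASAN/UBSAN/LSAN option strings) so every test in the run sees one consistent tree.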
00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2204747 ]] 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2204747 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.C6QZUs 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.C6QZUs/tests/target /tmp/spdk.C6QZUs 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55521591296 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994692608 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6473101312 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941708288 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997344256 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390178816 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398940160 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996475904 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997348352 00:06:18.015 10:19:12 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=872448 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:18.015 * Looking for test storage... 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.015 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55521591296 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8687693824 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:18.016 10:19:12 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
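The set_test_storage traces above decide where the tests get their 2 GiB of scratch space: df -T output is parsed into associative arrays keyed by mount point, the candidate directories are walked in order, and the first one whose filesystem still has requested_size available wins; here that is the overlay root with roughly 55 GB free, so the suite settles on /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target. A simplified sketch of the parsing loop, assuming the field order shown in the 'read -r source fs size use avail _ mount' trace (the candidate walk and error handling are omitted):

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))    # df -T reports 1K blocks, hence the scaling
        uses["$mount"]=$((use * 1024))
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)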
00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.016 10:19:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:18.017 10:19:12 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:18.017 10:19:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.546 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:20.547 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:20.547 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.547 10:19:14 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:20.547 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:20.547 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:20.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:20.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:06:20.547 00:06:20.547 --- 10.0.0.2 ping statistics --- 00:06:20.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.547 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:20.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:20.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:06:20.547 00:06:20.547 --- 10.0.0.1 ping statistics --- 00:06:20.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.547 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.547 ************************************ 00:06:20.547 START TEST nvmf_filesystem_no_in_capsule 00:06:20.547 ************************************ 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2206378 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2206378 00:06:20.547 10:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2206378 ']' 00:06:20.548 10:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.548 10:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.548 10:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.548 10:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.548 10:19:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.548 [2024-07-15 10:19:14.838129] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:20.548 [2024-07-15 10:19:14.838241] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:20.548 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.548 [2024-07-15 10:19:14.908012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.548 [2024-07-15 10:19:15.031232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:20.548 [2024-07-15 10:19:15.031307] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:20.548 [2024-07-15 10:19:15.031324] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:20.548 [2024-07-15 10:19:15.031338] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:20.548 [2024-07-15 10:19:15.031349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
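Condensed from the trace above, the plumbing that nvmf_tcp_init performs is roughly the following sketch; the interface names cvl_0_0/cvl_0_1 are this rig's renamed E810 ports and the 10.0.0.0/24 addresses are the harness defaults, so treat both as assumptions on any other machine:

  ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                  # reachability check before starting the target

The target itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as traced below), and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers.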
00:06:20.548 [2024-07-15 10:19:15.031441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.548 [2024-07-15 10:19:15.031500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.548 [2024-07-15 10:19:15.031581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.548 [2024-07-15 10:19:15.031585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.477 10:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.477 10:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:21.477 10:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:21.477 10:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:21.477 10:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:21.477 10:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:21.477 10:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:21.477 10:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:21.477 10:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.477 10:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:21.477 [2024-07-15 10:19:15.851120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.477 10:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.477 10:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:21.477 10:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.477 10:19:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:21.477 Malloc1 00:06:21.477 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.477 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:21.477 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.477 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:21.477 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.477 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:21.477 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.477 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:21.477 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.478 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:21.478 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.478 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:21.478 [2024-07-15 10:19:16.041357] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:21.478 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.478 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:21.478 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:21.478 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:21.478 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:21.478 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:21.478 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:21.478 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.478 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:21.478 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.478 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:21.478 { 00:06:21.478 "name": "Malloc1", 00:06:21.478 "aliases": [ 00:06:21.478 "165204a0-daab-4518-bfb1-16f5dbf1e5df" 00:06:21.478 ], 00:06:21.478 "product_name": "Malloc disk", 00:06:21.478 "block_size": 512, 00:06:21.478 "num_blocks": 1048576, 00:06:21.478 "uuid": "165204a0-daab-4518-bfb1-16f5dbf1e5df", 00:06:21.478 "assigned_rate_limits": { 00:06:21.478 "rw_ios_per_sec": 0, 00:06:21.478 "rw_mbytes_per_sec": 0, 00:06:21.478 "r_mbytes_per_sec": 0, 00:06:21.478 "w_mbytes_per_sec": 0 00:06:21.478 }, 00:06:21.478 "claimed": true, 00:06:21.478 "claim_type": "exclusive_write", 00:06:21.478 "zoned": false, 00:06:21.478 "supported_io_types": { 00:06:21.478 "read": true, 00:06:21.478 "write": true, 00:06:21.478 "unmap": true, 00:06:21.478 "flush": true, 00:06:21.478 "reset": true, 00:06:21.478 "nvme_admin": false, 00:06:21.478 "nvme_io": false, 00:06:21.478 "nvme_io_md": false, 00:06:21.478 "write_zeroes": true, 00:06:21.478 "zcopy": true, 00:06:21.478 "get_zone_info": false, 00:06:21.478 "zone_management": false, 00:06:21.478 "zone_append": false, 00:06:21.478 "compare": false, 00:06:21.478 "compare_and_write": false, 00:06:21.478 "abort": true, 00:06:21.478 "seek_hole": false, 00:06:21.478 "seek_data": false, 00:06:21.478 "copy": true, 00:06:21.478 "nvme_iov_md": false 00:06:21.478 }, 00:06:21.478 "memory_domains": [ 00:06:21.478 { 
00:06:21.478 "dma_device_id": "system", 00:06:21.478 "dma_device_type": 1 00:06:21.478 }, 00:06:21.478 { 00:06:21.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.478 "dma_device_type": 2 00:06:21.478 } 00:06:21.478 ], 00:06:21.478 "driver_specific": {} 00:06:21.478 } 00:06:21.478 ]' 00:06:21.478 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:21.478 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:21.478 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:21.735 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:21.735 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:21.735 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:21.735 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:21.735 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:22.299 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:22.299 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:22.299 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:22.299 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:22.299 10:19:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:24.195 10:19:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:24.195 10:19:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:24.195 10:19:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:24.195 10:19:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:24.195 10:19:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:24.195 10:19:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:24.195 10:19:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:24.195 10:19:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:24.195 10:19:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:24.195 10:19:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:06:24.195 10:19:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:24.195 10:19:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:24.195 10:19:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:24.195 10:19:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:24.195 10:19:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:24.195 10:19:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:24.195 10:19:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:24.452 10:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:25.017 10:19:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:25.950 10:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:25.951 10:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:25.951 10:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:25.951 10:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.951 10:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:26.209 ************************************ 00:06:26.209 START TEST filesystem_ext4 00:06:26.209 ************************************ 00:06:26.209 10:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:26.209 10:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:26.209 10:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:26.209 10:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:26.209 10:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:26.209 10:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:26.209 10:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:26.210 10:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:26.210 10:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:26.210 10:19:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:26.210 10:19:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:26.210 mke2fs 1.46.5 (30-Dec-2021) 00:06:26.210 Discarding device blocks: 0/522240 done 00:06:26.210 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:26.210 Filesystem UUID: 4f7f1115-0d16-4557-a67c-8c8aecd404fa 00:06:26.210 Superblock backups stored on blocks: 00:06:26.210 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:26.210 00:06:26.210 Allocating group tables: 0/64 done 00:06:26.210 Writing inode tables: 0/64 done 00:06:26.467 Creating journal (8192 blocks): done 00:06:27.402 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:06:27.402 00:06:27.402 10:19:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:27.402 10:19:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:27.659 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:27.659 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:27.659 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:27.659 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:27.659 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:27.659 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:27.917 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2206378 00:06:27.917 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:27.917 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:27.917 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:27.917 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:27.917 00:06:27.917 real 0m1.735s 00:06:27.917 user 0m0.020s 00:06:27.917 sys 0m0.050s 00:06:27.917 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.917 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:27.917 ************************************ 00:06:27.917 END TEST filesystem_ext4 00:06:27.917 ************************************ 00:06:27.917 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:27.917 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:27.917 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:27.917 10:19:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.917 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.917 ************************************ 00:06:27.917 START TEST filesystem_btrfs 00:06:27.917 ************************************ 00:06:27.917 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:27.917 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:27.917 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:27.917 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:27.917 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:27.917 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:27.918 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:27.918 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:27.918 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:27.918 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:27.918 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:28.175 btrfs-progs v6.6.2 00:06:28.175 See https://btrfs.readthedocs.io for more information. 00:06:28.175 00:06:28.175 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:28.175 NOTE: several default settings have changed in version 5.15, please make sure 00:06:28.175 this does not affect your deployments: 00:06:28.175 - DUP for metadata (-m dup) 00:06:28.175 - enabled no-holes (-O no-holes) 00:06:28.175 - enabled free-space-tree (-R free-space-tree) 00:06:28.175 00:06:28.175 Label: (null) 00:06:28.175 UUID: 43f6e905-3c6f-4412-b67e-d41b3c83c250 00:06:28.175 Node size: 16384 00:06:28.175 Sector size: 4096 00:06:28.175 Filesystem size: 510.00MiB 00:06:28.175 Block group profiles: 00:06:28.175 Data: single 8.00MiB 00:06:28.175 Metadata: DUP 32.00MiB 00:06:28.175 System: DUP 8.00MiB 00:06:28.175 SSD detected: yes 00:06:28.175 Zoned device: no 00:06:28.175 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:28.175 Runtime features: free-space-tree 00:06:28.175 Checksum: crc32c 00:06:28.175 Number of devices: 1 00:06:28.175 Devices: 00:06:28.175 ID SIZE PATH 00:06:28.175 1 510.00MiB /dev/nvme0n1p1 00:06:28.175 00:06:28.175 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:28.175 10:19:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2206378 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:28.740 00:06:28.740 real 0m0.948s 00:06:28.740 user 0m0.012s 00:06:28.740 sys 0m0.120s 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:28.740 ************************************ 00:06:28.740 END TEST filesystem_btrfs 00:06:28.740 ************************************ 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.740 ************************************ 00:06:28.740 START TEST filesystem_xfs 00:06:28.740 ************************************ 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:28.740 10:19:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:28.997 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:28.997 = sectsz=512 attr=2, projid32bit=1 00:06:28.997 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:28.997 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:28.997 data = bsize=4096 blocks=130560, imaxpct=25 00:06:28.997 = sunit=0 swidth=0 blks 00:06:28.997 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:28.997 log =internal log bsize=4096 blocks=16384, version=2 00:06:28.997 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:28.997 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:29.980 Discarding blocks...Done. 
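After each mkfs, the test body that follows is the same smoke test regardless of filesystem type; spelled out from the trace (here nvmfpid is the target PID captured at startup, 2206378 in this run):

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa                     # one file written across NVMe/TCP
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                        # target process must have survived the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1     # controller still enumerated
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present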
00:06:29.980 10:19:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:29.980 10:19:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:32.505 10:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:32.505 10:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:32.505 10:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:32.505 10:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:32.505 10:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:32.505 10:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:32.505 10:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2206378 00:06:32.505 10:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:32.505 10:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:32.505 10:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:32.505 10:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:32.505 00:06:32.505 real 0m3.501s 00:06:32.505 user 0m0.022s 00:06:32.505 sys 0m0.051s 00:06:32.505 10:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.505 10:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:32.505 ************************************ 00:06:32.505 END TEST filesystem_xfs 00:06:32.505 ************************************ 00:06:32.505 10:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:32.506 10:19:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:32.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:32.763 10:19:27 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2206378 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2206378 ']' 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2206378 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2206378 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2206378' 00:06:32.763 killing process with pid 2206378 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2206378 00:06:32.763 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2206378 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:33.325 00:06:33.325 real 0m13.032s 00:06:33.325 user 0m50.071s 00:06:33.325 sys 0m1.894s 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:33.325 ************************************ 00:06:33.325 END TEST nvmf_filesystem_no_in_capsule 00:06:33.325 ************************************ 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.325 ************************************ 00:06:33.325 START TEST nvmf_filesystem_in_capsule 00:06:33.325 ************************************ 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2208079 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2208079 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2208079 ']' 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.325 10:19:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:33.325 [2024-07-15 10:19:27.926210] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:33.325 [2024-07-15 10:19:27.926295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.325 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.583 [2024-07-15 10:19:27.993205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.583 [2024-07-15 10:19:28.115488] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:33.583 [2024-07-15 10:19:28.115552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
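The provisioning that follows mirrors the no_in_capsule group except for the transport's -c argument; assuming rpc_cmd resolves to scripts/rpc.py against /var/tmp/spdk.sock as in the stock harness, the target-side sequence is:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # 4096-byte in-capsule data (0 in the first group)
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1             # 512 MiB RAM-backed bdev, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420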
00:06:33.583 [2024-07-15 10:19:28.115576] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.583 [2024-07-15 10:19:28.115590] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.583 [2024-07-15 10:19:28.115601] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:33.583 [2024-07-15 10:19:28.115680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.583 [2024-07-15 10:19:28.115714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.583 [2024-07-15 10:19:28.115767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.583 [2024-07-15 10:19:28.115770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.514 10:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.514 10:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:34.514 10:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:34.514 10:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:34.514 10:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.514 10:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:34.514 10:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:34.514 10:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:34.514 10:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.514 10:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.514 [2024-07-15 10:19:28.953242] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.514 10:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.514 10:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:34.514 10:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.514 10:19:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.514 Malloc1 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.514 10:19:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.514 [2024-07-15 10:19:29.137557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:34.514 { 00:06:34.514 "name": "Malloc1", 00:06:34.514 "aliases": [ 00:06:34.514 "fdfebf3c-8023-4c71-ac6e-11a626d6ac52" 00:06:34.514 ], 00:06:34.514 "product_name": "Malloc disk", 00:06:34.514 "block_size": 512, 00:06:34.514 "num_blocks": 1048576, 00:06:34.514 "uuid": "fdfebf3c-8023-4c71-ac6e-11a626d6ac52", 00:06:34.514 "assigned_rate_limits": { 00:06:34.514 "rw_ios_per_sec": 0, 00:06:34.514 "rw_mbytes_per_sec": 0, 00:06:34.514 "r_mbytes_per_sec": 0, 00:06:34.514 "w_mbytes_per_sec": 0 00:06:34.514 }, 00:06:34.514 "claimed": true, 00:06:34.514 "claim_type": "exclusive_write", 00:06:34.514 "zoned": false, 00:06:34.514 "supported_io_types": { 00:06:34.514 "read": true, 00:06:34.514 "write": true, 00:06:34.514 "unmap": true, 00:06:34.514 "flush": true, 00:06:34.514 "reset": true, 00:06:34.514 "nvme_admin": false, 00:06:34.514 "nvme_io": false, 00:06:34.514 "nvme_io_md": false, 00:06:34.514 "write_zeroes": true, 00:06:34.514 "zcopy": true, 00:06:34.514 "get_zone_info": false, 00:06:34.514 "zone_management": false, 00:06:34.514 
"zone_append": false, 00:06:34.514 "compare": false, 00:06:34.514 "compare_and_write": false, 00:06:34.514 "abort": true, 00:06:34.514 "seek_hole": false, 00:06:34.514 "seek_data": false, 00:06:34.514 "copy": true, 00:06:34.514 "nvme_iov_md": false 00:06:34.514 }, 00:06:34.514 "memory_domains": [ 00:06:34.514 { 00:06:34.514 "dma_device_id": "system", 00:06:34.514 "dma_device_type": 1 00:06:34.514 }, 00:06:34.514 { 00:06:34.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.514 "dma_device_type": 2 00:06:34.514 } 00:06:34.514 ], 00:06:34.514 "driver_specific": {} 00:06:34.514 } 00:06:34.514 ]' 00:06:34.514 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:34.772 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:34.772 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:34.772 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:34.772 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:34.772 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:34.772 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:34.772 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:35.335 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:35.335 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:35.335 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:35.335 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:35.335 10:19:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:37.228 10:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:37.228 10:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:37.228 10:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:37.228 10:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:37.228 10:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:37.228 10:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:37.228 10:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:37.228 10:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:06:37.228 10:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:06:37.507 10:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:06:37.507 10:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:06:37.507 10:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:06:37.507 10:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:06:37.507 10:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:06:37.507 10:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:06:37.507 10:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:06:37.507 10:19:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:06:37.507 10:19:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:06:38.438 10:19:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:06:39.370 10:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']'
00:06:39.370 10:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1
00:06:39.370 10:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:39.370 10:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:39.370 10:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:39.370 ************************************
00:06:39.370 START TEST filesystem_in_capsule_ext4
00:06:39.370 ************************************
00:06:39.370 10:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1
00:06:39.370 10:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:06:39.370 10:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:06:39.370 10:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:06:39.370 10:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4
00:06:39.370 10:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:06:39.370 10:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0
00:06:39.370 10:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force
00:06:39.370 10:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']'
00:06:39.370 10:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F
00:06:39.370 10:19:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:06:39.370 mke2fs 1.46.5 (30-Dec-2021)
00:06:39.370 Discarding device blocks: 0/522240 done
00:06:39.370 Creating filesystem with 522240 1k blocks and 130560 inodes
00:06:39.370 Filesystem UUID: 60029e5b-fbb8-4419-9aaa-99b07a9cda68
00:06:39.370 Superblock backups stored on blocks:
00:06:39.370 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:06:39.370
00:06:39.370 Allocating group tables: 0/64 done
00:06:39.370 Writing inode tables: 0/64 done
00:06:39.627 Creating journal (8192 blocks): done
00:06:40.560 Writing superblocks and filesystem accounting information: 0/64 done
00:06:40.560
00:06:40.560 10:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0
00:06:40.560 10:19:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2208079
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:06:41.126
00:06:41.126 real 0m1.923s
00:06:41.126 user 0m0.015s
00:06:41.126 sys 0m0.052s
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
00:06:41.126 ************************************
00:06:41.126 END TEST filesystem_in_capsule_ext4 ************************************
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:41.126 ************************************
00:06:41.126 START TEST filesystem_in_capsule_btrfs
00:06:41.126 ************************************
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']'
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f
00:06:41.126 10:19:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:06:41.691 btrfs-progs v6.6.2
00:06:41.691 See https://btrfs.readthedocs.io for more information.
00:06:41.691
00:06:41.691 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:06:41.691 NOTE: several default settings have changed in version 5.15, please make sure
00:06:41.691 this does not affect your deployments:
00:06:41.691 - DUP for metadata (-m dup)
00:06:41.691 - enabled no-holes (-O no-holes)
00:06:41.691 - enabled free-space-tree (-R free-space-tree)
00:06:41.691
00:06:41.691 Label: (null)
00:06:41.691 UUID: 90df5c9f-7924-42a3-af6b-b59302cdb74f
00:06:41.691 Node size: 16384
00:06:41.691 Sector size: 4096
00:06:41.691 Filesystem size: 510.00MiB
00:06:41.691 Block group profiles:
00:06:41.691 Data: single 8.00MiB
00:06:41.691 Metadata: DUP 32.00MiB
00:06:41.691 System: DUP 8.00MiB
00:06:41.691 SSD detected: yes
00:06:41.691 Zoned device: no
00:06:41.691 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:06:41.691 Runtime features: free-space-tree
00:06:41.691 Checksum: crc32c
00:06:41.691 Number of devices: 1
00:06:41.691 Devices:
00:06:41.691 ID SIZE PATH
00:06:41.691 1 510.00MiB /dev/nvme0n1p1
00:06:41.691
00:06:41.691 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0
00:06:41.691 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:41.691 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:41.691 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:06:41.691 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:41.691 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:06:41.691 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:06:41.691 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:06:41.691 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2208079
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:06:41.950
00:06:41.950 real 0m0.580s
00:06:41.950 user 0m0.009s
00:06:41.950 sys 0m0.115s
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:06:41.950 ************************************
00:06:41.950 END TEST filesystem_in_capsule_btrfs
00:06:41.950 ************************************
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:41.950 ************************************
00:06:41.950 START TEST filesystem_in_capsule_xfs
00:06:41.950 ************************************
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']'
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f
00:06:41.950 10:19:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1
00:06:41.950 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:06:41.950 = sectsz=512 attr=2, projid32bit=1
00:06:41.950 = crc=1 finobt=1, sparse=1, rmapbt=0
00:06:41.950 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:06:41.950 data = bsize=4096 blocks=130560, imaxpct=25
00:06:41.950 = sunit=0 swidth=0 blks
00:06:41.950 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:06:41.950 log =internal log bsize=4096 blocks=16384, version=2
00:06:41.950 = sectsz=512 sunit=0 blks, lazy-count=1
00:06:41.950 realtime =none extsz=4096 blocks=0, rtextents=0
00:06:42.882 Discarding blocks...Done.
00:06:42.882 10:19:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0
00:06:42.882 10:19:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:44.780 10:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:44.780 10:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync
00:06:44.780 10:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:44.780 10:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync
00:06:44.780 10:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0
00:06:44.780 10:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:06:44.780 10:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2208079
00:06:44.780 10:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:06:44.780 10:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:06:44.780 10:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:06:44.780 10:19:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:06:44.780
00:06:44.780 real 0m2.611s
00:06:44.780 user 0m0.010s
00:06:44.780 sys 0m0.068s
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x
00:06:44.780 ************************************
00:06:44.780 END TEST filesystem_in_capsule_xfs
00:06:44.780 ************************************
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:06:44.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2208079
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2208079 ']'
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2208079
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2208079
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:44.780 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2208079'
killing process with pid 2208079
00:06:44.781 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2208079
00:06:44.781 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2208079
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:06:45.349
00:06:45.349 real 0m11.823s
00:06:45.349 user 0m45.393s
00:06:45.349 sys 0m1.767s
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:45.349 ************************************
00:06:45.349 END TEST nvmf_filesystem_in_capsule
00:06:45.349 ************************************
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:06:45.349 10:19:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:47.291 10:19:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:06:47.291
00:06:47.291 real 0m29.393s
00:06:47.291 user 1m36.397s
00:06:47.291 sys 0m5.281s
00:06:47.291 10:19:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:47.291 10:19:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:06:47.291 ************************************
00:06:47.291 END TEST nvmf_filesystem
00:06:47.291 ************************************
00:06:47.291 10:19:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:06:47.291 10:19:41 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:06:47.291 10:19:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:06:47.291 10:19:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:47.291 10:19:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:47.291 ************************************
00:06:47.291 START TEST nvmf_target_discovery
00:06:47.291 ************************************
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:06:47.291 * Looking for test storage...
00:06:47.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable
00:06:47.291 10:19:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=()
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=()
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=()
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=()
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=()
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=()
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=()
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]]
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:49.826 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
Found net devices under 0000:0a:00.0: cvl_0_0
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]]
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
Found net devices under 0000:0a:00.1: cvl_0_1
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:06:49.827 10:19:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:49.827 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:49.827 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:49.827 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:06:49.827 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:49.827 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:49.827 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:49.827 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:06:49.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:49.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms
00:06:49.827
00:06:49.827 --- 10.0.0.2 ping statistics ---
00:06:49.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:49.827 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms
00:06:49.827 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:49.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:49.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms
00:06:49.827
00:06:49.827 --- 10.0.0.1 ping statistics ---
00:06:49.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:49.827 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
00:06:49.827 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:49.827 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0
00:06:49.827 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:06:49.827 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:49.827 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:06:49.827 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:06:49.827 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:49.827 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:06:49.827 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:06:49.977 10:19:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:06:49.977 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:06:49.977 10:19:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:49.977 10:19:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:49.977 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2211680
00:06:49.977 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:06:49.977 10:19:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2211680
00:06:49.977 10:19:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2211680 ']'
00:06:49.977 10:19:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:49.977 10:19:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:49.977 10:19:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:49.977 10:19:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:49.977 10:19:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:49.977 [2024-07-15 10:19:44.176567] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:06:49.977 [2024-07-15 10:19:44.176652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:49.977 EAL: No free 2048 kB hugepages reported on node 1
00:06:49.977 [2024-07-15 10:19:44.243590] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:49.977 [2024-07-15 10:19:44.364531] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:49.977 [2024-07-15 10:19:44.364582] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:49.977 [2024-07-15 10:19:44.364598] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:49.977 [2024-07-15 10:19:44.364618] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:49.977 [2024-07-15 10:19:44.364630] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:49.977 [2024-07-15 10:19:44.364714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:49.977 [2024-07-15 10:19:44.364766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:49.977 [2024-07-15 10:19:44.364815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:49.977 [2024-07-15 10:19:44.364818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.761 [2024-07-15 10:19:45.188193] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.761 Null1
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.761 [2024-07-15 10:19:45.228456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.761 Null2
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.761 Null3
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.761 Null4
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:06:50.761 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.762 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.762 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.762 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:06:50.762 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.762 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.762 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.762 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:06:50.762 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.762 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.762 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.762 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:06:50.762 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.762 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:50.762 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.762 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420
00:06:51.020
00:06:51.020 Discovery Log Number of Records 6, Generation counter 6
00:06:51.020 =====Discovery Log Entry 0======
00:06:51.020 trtype: tcp
00:06:51.020 adrfam: ipv4
00:06:51.020 subtype: current discovery subsystem
00:06:51.020 treq: not required
00:06:51.020 portid: 0
00:06:51.020 trsvcid: 4420
00:06:51.020 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:06:51.020 traddr: 10.0.0.2
00:06:51.020 eflags: explicit discovery connections, duplicate discovery information
00:06:51.020 sectype: none
00:06:51.020 =====Discovery Log Entry 1======
00:06:51.020 trtype: tcp
00:06:51.020 adrfam: ipv4
00:06:51.020 subtype: nvme subsystem
00:06:51.020 treq: not required
00:06:51.020 portid: 0
00:06:51.020 trsvcid: 4420
00:06:51.020 subnqn: nqn.2016-06.io.spdk:cnode1
00:06:51.020 traddr: 10.0.0.2
00:06:51.020 eflags: none
00:06:51.020 sectype: none
00:06:51.020 =====Discovery Log Entry 2======
00:06:51.020 trtype: tcp
00:06:51.020 adrfam: ipv4
00:06:51.020 subtype: nvme subsystem
00:06:51.020 treq: not required
00:06:51.020 portid: 0
00:06:51.020 trsvcid: 4420
00:06:51.020 subnqn: nqn.2016-06.io.spdk:cnode2
00:06:51.020 traddr: 10.0.0.2
00:06:51.020 eflags: none
00:06:51.020 sectype: none
00:06:51.020 =====Discovery Log Entry 3======
00:06:51.020 trtype: tcp
00:06:51.020 adrfam: ipv4
00:06:51.020 subtype: nvme subsystem
00:06:51.020 treq: not required
00:06:51.020 portid: 0
00:06:51.020 trsvcid: 4420
00:06:51.020 subnqn: nqn.2016-06.io.spdk:cnode3
00:06:51.020 traddr: 10.0.0.2
00:06:51.020 eflags: none
00:06:51.020 sectype: none
00:06:51.020 =====Discovery Log Entry 4======
00:06:51.020 trtype: tcp
00:06:51.020 adrfam: ipv4
00:06:51.020 subtype: nvme subsystem
00:06:51.020 treq: not required
00:06:51.020 portid: 0 00:06:51.020 trsvcid: 4420 00:06:51.020 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:51.020 traddr: 10.0.0.2 00:06:51.020 eflags: none 00:06:51.020 sectype: none 00:06:51.020 =====Discovery Log Entry 5====== 00:06:51.020 trtype: tcp 00:06:51.020 adrfam: ipv4 00:06:51.020 subtype: discovery subsystem referral 00:06:51.020 treq: not required 00:06:51.020 portid: 0 00:06:51.020 trsvcid: 4430 00:06:51.020 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:51.020 traddr: 10.0.0.2 00:06:51.020 eflags: none 00:06:51.020 sectype: none 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:51.020 Perform nvmf subsystem discovery via RPC 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.020 [ 00:06:51.020 { 00:06:51.020 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:51.020 "subtype": "Discovery", 00:06:51.020 "listen_addresses": [ 00:06:51.020 { 00:06:51.020 "trtype": "TCP", 00:06:51.020 "adrfam": "IPv4", 00:06:51.020 "traddr": "10.0.0.2", 00:06:51.020 "trsvcid": "4420" 00:06:51.020 } 00:06:51.020 ], 00:06:51.020 "allow_any_host": true, 00:06:51.020 "hosts": [] 00:06:51.020 }, 00:06:51.020 { 00:06:51.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:51.020 "subtype": "NVMe", 00:06:51.020 "listen_addresses": [ 00:06:51.020 { 00:06:51.020 "trtype": "TCP", 00:06:51.020 "adrfam": "IPv4", 00:06:51.020 "traddr": "10.0.0.2", 00:06:51.020 "trsvcid": "4420" 00:06:51.020 } 00:06:51.020 ], 00:06:51.020 "allow_any_host": true, 00:06:51.020 "hosts": [], 00:06:51.020 "serial_number": "SPDK00000000000001", 00:06:51.020 "model_number": "SPDK bdev Controller", 00:06:51.020 "max_namespaces": 32, 00:06:51.020 "min_cntlid": 1, 00:06:51.020 "max_cntlid": 65519, 00:06:51.020 "namespaces": [ 00:06:51.020 { 00:06:51.020 "nsid": 1, 00:06:51.020 "bdev_name": "Null1", 00:06:51.020 "name": "Null1", 00:06:51.020 "nguid": "E9563E4C2D6141CA8B8CDACCAB6F1AC9", 00:06:51.020 "uuid": "e9563e4c-2d61-41ca-8b8c-daccab6f1ac9" 00:06:51.020 } 00:06:51.020 ] 00:06:51.020 }, 00:06:51.020 { 00:06:51.020 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:51.020 "subtype": "NVMe", 00:06:51.020 "listen_addresses": [ 00:06:51.020 { 00:06:51.020 "trtype": "TCP", 00:06:51.020 "adrfam": "IPv4", 00:06:51.020 "traddr": "10.0.0.2", 00:06:51.020 "trsvcid": "4420" 00:06:51.020 } 00:06:51.020 ], 00:06:51.020 "allow_any_host": true, 00:06:51.020 "hosts": [], 00:06:51.020 "serial_number": "SPDK00000000000002", 00:06:51.020 "model_number": "SPDK bdev Controller", 00:06:51.020 "max_namespaces": 32, 00:06:51.020 "min_cntlid": 1, 00:06:51.020 "max_cntlid": 65519, 00:06:51.020 "namespaces": [ 00:06:51.020 { 00:06:51.020 "nsid": 1, 00:06:51.020 "bdev_name": "Null2", 00:06:51.020 "name": "Null2", 00:06:51.020 "nguid": "8C7FD108A3D54504BF7BC4625E47CF41", 00:06:51.020 "uuid": "8c7fd108-a3d5-4504-bf7b-c4625e47cf41" 00:06:51.020 } 00:06:51.020 ] 00:06:51.020 }, 00:06:51.020 { 00:06:51.020 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:51.020 "subtype": "NVMe", 00:06:51.020 "listen_addresses": [ 00:06:51.020 { 00:06:51.020 "trtype": "TCP", 00:06:51.020 "adrfam": "IPv4", 00:06:51.020 "traddr": "10.0.0.2", 00:06:51.020 "trsvcid": "4420" 00:06:51.020 } 00:06:51.020 ], 00:06:51.020 "allow_any_host": true, 
00:06:51.020 "hosts": [], 00:06:51.020 "serial_number": "SPDK00000000000003", 00:06:51.020 "model_number": "SPDK bdev Controller", 00:06:51.020 "max_namespaces": 32, 00:06:51.020 "min_cntlid": 1, 00:06:51.020 "max_cntlid": 65519, 00:06:51.020 "namespaces": [ 00:06:51.020 { 00:06:51.020 "nsid": 1, 00:06:51.020 "bdev_name": "Null3", 00:06:51.020 "name": "Null3", 00:06:51.020 "nguid": "7CB8FCEE91EF4B82A973596AE739D063", 00:06:51.020 "uuid": "7cb8fcee-91ef-4b82-a973-596ae739d063" 00:06:51.020 } 00:06:51.020 ] 00:06:51.020 }, 00:06:51.020 { 00:06:51.020 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:51.020 "subtype": "NVMe", 00:06:51.020 "listen_addresses": [ 00:06:51.020 { 00:06:51.020 "trtype": "TCP", 00:06:51.020 "adrfam": "IPv4", 00:06:51.020 "traddr": "10.0.0.2", 00:06:51.020 "trsvcid": "4420" 00:06:51.020 } 00:06:51.020 ], 00:06:51.020 "allow_any_host": true, 00:06:51.020 "hosts": [], 00:06:51.020 "serial_number": "SPDK00000000000004", 00:06:51.020 "model_number": "SPDK bdev Controller", 00:06:51.020 "max_namespaces": 32, 00:06:51.020 "min_cntlid": 1, 00:06:51.020 "max_cntlid": 65519, 00:06:51.020 "namespaces": [ 00:06:51.020 { 00:06:51.020 "nsid": 1, 00:06:51.020 "bdev_name": "Null4", 00:06:51.020 "name": "Null4", 00:06:51.020 "nguid": "3C981798D1DE45CF89D310F56F95D901", 00:06:51.020 "uuid": "3c981798-d1de-45cf-89d3-10f56f95d901" 00:06:51.020 } 00:06:51.020 ] 00:06:51.020 } 00:06:51.020 ] 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.020 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
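For readers following the steps rather than the harness plumbing: the block above is the setup/verify phase of test/nvmf/target/discovery.sh. A minimal standalone sketch of the same pattern -- assuming a running nvmf_tgt, the scripts/rpc.py wrapper from an SPDK checkout, and the 10.0.0.2 listener address used in this run -- looks roughly like this:

  #!/usr/bin/env bash
  # Back four NVMe-oF subsystems with null bdevs, expose them on one TCP
  # listener, then compare the initiator's and the target's view of discovery.
  rpc=./scripts/rpc.py   # path to the RPC wrapper is an assumption; adjust to your tree

  for i in 1 2 3 4; do
      $rpc bdev_null_create Null$i 102400 512                  # size in MB, block size in bytes
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
           -a -s SPDK0000000000000$i                           # -a: allow any host
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
           -t tcp -a 10.0.0.2 -s 4420
  done

  # Make the discovery service itself reachable and advertise one referral.
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

  # A kernel initiator should now see 6 records (discovery + 4 subsystems +
  # 1 referral); nvmf_get_subsystems shows the same state from the target side.
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_get_subsystems

The teardown that continues below simply walks the list in reverse: delete each subsystem, delete its null bdev, then drop the referral.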
00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:51.021 rmmod nvme_tcp 00:06:51.021 rmmod nvme_fabrics 00:06:51.021 rmmod nvme_keyring 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2211680 ']' 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2211680 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2211680 ']' 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2211680 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2211680 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2211680' 00:06:51.021 killing process with pid 2211680 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2211680 00:06:51.021 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2211680 00:06:51.587 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:51.587 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:51.587 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:51.587 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:51.587 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:51.588 10:19:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.588 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:51.588 10:19:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.493 10:19:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:53.493 00:06:53.493 real 0m6.122s 00:06:53.493 user 0m7.164s 00:06:53.493 sys 0m1.865s 00:06:53.493 10:19:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.493 10:19:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:53.493 ************************************ 00:06:53.493 END TEST nvmf_target_discovery 00:06:53.493 ************************************ 00:06:53.493 10:19:48 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:06:53.493 10:19:48 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:53.493 10:19:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:53.493 10:19:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.493 10:19:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:53.493 ************************************ 00:06:53.493 START TEST nvmf_referrals 00:06:53.493 ************************************ 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:53.493 * Looking for test storage... 00:06:53.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.493 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
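Before the trace continues: the nvmf_referrals run that follows drives the discovery-referral RPCs end to end -- add three referrals, confirm that both the RPC view and a kernel initiator report them, then remove them again. Condensed, and assuming the same rpc.py wrapper and the 10.0.0.2:8009 discovery listener this job configures, the exercised sequence is roughly:

  rpc=./scripts/rpc.py   # assumed wrapper path

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

  # Referrals point initiators at additional discovery services.
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_add_referral -t tcp -a $ip -s 4430
  done
  $rpc nvmf_discovery_get_referrals | jq length      # expect 3

  # The same three addresses must show up in a real discovery log page.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_remove_referral -t tcp -a $ip -s 4430
  done

Later passes in the trace repeat the cycle with explicit subsystem NQNs (-n discovery, -n nqn.2016-06.io.spdk:cnode1) to check how the referral's subtype is reported.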
00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:53.494 10:19:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:55.398 10:19:50 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:55.398 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:55.398 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:55.398 10:19:50 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:55.398 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.398 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:55.399 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:55.399 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:55.658 10:19:50 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:06:55.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:55.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms
00:06:55.658
00:06:55.658 --- 10.0.0.2 ping statistics ---
00:06:55.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:55.658 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:55.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:55.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms
00:06:55.658
00:06:55.658 --- 10.0.0.1 ping statistics ---
00:06:55.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:55.658 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2213782
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2213782
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2213782 ']'
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
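The two pings above verify the split-port topology that nvmftestinit builds for phy runs: one port of the e810 pair is moved into a private network namespace and becomes the target side, the other stays in the root namespace as the initiator. Stripped of the helper functions, the wiring is essentially the following (interface names cvl_0_0/cvl_0_1 are specific to this machine; substitute your own):

  # Target side lives in its own namespace; initiator stays in the root one.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP port (4420) on the initiator-side interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

nvmf_tgt itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt, visible just below), which is why every listener in these tests sits on 10.0.0.2.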
00:06:55.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.658 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:55.658 [2024-07-15 10:19:50.225094] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:55.658 [2024-07-15 10:19:50.225179] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.658 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.658 [2024-07-15 10:19:50.297530] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.938 [2024-07-15 10:19:50.423349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.938 [2024-07-15 10:19:50.423418] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.938 [2024-07-15 10:19:50.423434] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.938 [2024-07-15 10:19:50.423447] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.938 [2024-07-15 10:19:50.423459] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:55.938 [2024-07-15 10:19:50.423519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.938 [2024-07-15 10:19:50.423576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.938 [2024-07-15 10:19:50.423602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.938 [2024-07-15 10:19:50.423606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.938 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.938 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:55.938 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:55.938 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:55.938 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:55.938 10:19:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.938 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:55.938 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.938 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:55.938 [2024-07-15 10:19:50.585853] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.195 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.195 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:56.195 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.195 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:56.195 [2024-07-15 10:19:50.598112] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:06:56.195 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.195 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:56.195 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.195 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:56.195 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.195 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:56.195 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:56.196 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:56.453 10:19:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:56.453 10:19:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.711 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:56.711 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:56.711 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:56.711 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:56.711 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:56.711 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:56.711 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:56.711 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:56.711 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:56.711 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:56.711 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:56.711 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:56.711 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:56.711 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:56.711 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:56.967 10:19:51 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:56.967 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:56.967 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:56.967 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:56.967 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:56.967 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:57.224 10:19:51 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:57.224 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:57.481 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:57.481 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:57.481 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:57.481 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:57.481 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:57.481 10:19:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:57.481 10:19:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:57.481 10:19:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:57.481 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.481 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.481 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.481 10:19:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:57.481 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.481 10:19:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:57.481 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.481 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:57.779 
10:19:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:57.779 rmmod nvme_tcp 00:06:57.779 rmmod nvme_fabrics 00:06:57.779 rmmod nvme_keyring 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:57.779 10:19:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2213782 ']' 00:06:57.780 10:19:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2213782 00:06:57.780 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2213782 ']' 00:06:57.780 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2213782 00:06:57.780 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:57.780 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.780 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2213782 00:06:57.780 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.780 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.780 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2213782' 00:06:57.780 killing process with pid 2213782 00:06:57.780 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2213782 00:06:57.780 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2213782 00:06:58.036 10:19:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:58.036 10:19:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:58.036 10:19:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:58.036 10:19:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:58.036 10:19:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:58.036 10:19:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.036 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:58.036 10:19:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.570 10:19:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:00.570 00:07:00.570 real 0m6.637s 00:07:00.570 user 0m10.008s 00:07:00.570 sys 0m2.075s 00:07:00.570 10:19:54 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.570 10:19:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:00.570 ************************************ 00:07:00.570 END TEST nvmf_referrals 00:07:00.570 ************************************ 00:07:00.570 10:19:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:00.570 10:19:54 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:00.570 10:19:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:00.570 10:19:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.570 10:19:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:00.570 ************************************ 00:07:00.570 START TEST nvmf_connect_disconnect 00:07:00.570 ************************************ 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:00.570 * Looking for test storage... 00:07:00.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.570 10:19:54 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.570 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:00.571 10:19:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:02.471 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:02.472 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:02.472 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:02.472 10:19:56 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:02.472 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:02.472 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:02.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:02.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:07:02.472 00:07:02.472 --- 10.0.0.2 ping statistics --- 00:07:02.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.472 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:02.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:02.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:07:02.472 00:07:02.472 --- 10.0.0.1 ping statistics --- 00:07:02.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.472 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2216072 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2216072 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2216072 ']' 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.472 10:19:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:02.472 [2024-07-15 10:19:56.973210] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
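
The two reachability pings above close out nvmf_tcp_init's network setup: the trace wires the two E810 ports into a point-to-point test network by moving one into a private namespace. The same plumbing, condensed (interface and address names taken verbatim from the trace; a sketch of what the helper does, not the common.sh source itself):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator-side port stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # verify reachability in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target app is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt), so initiator and target traffic really crosses the wire between the two ports.
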
00:07:02.472 [2024-07-15 10:19:56.973319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.472 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.473 [2024-07-15 10:19:57.039039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.731 [2024-07-15 10:19:57.152066] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.731 [2024-07-15 10:19:57.152121] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.731 [2024-07-15 10:19:57.152151] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.731 [2024-07-15 10:19:57.152164] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.731 [2024-07-15 10:19:57.152174] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:02.731 [2024-07-15 10:19:57.152240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.731 [2024-07-15 10:19:57.152271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.731 [2024-07-15 10:19:57.152330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.731 [2024-07-15 10:19:57.152332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:02.731 [2024-07-15 10:19:57.315771] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:02.731 10:19:57 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:02.731 [2024-07-15 10:19:57.373228] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:02.731 10:19:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:06.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:08.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:14.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:16.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:16.888 rmmod nvme_tcp 00:07:16.888 rmmod nvme_fabrics 00:07:16.888 rmmod nvme_keyring 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2216072 ']' 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2216072 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- 
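
Behind the five "disconnected 1 controller(s)" lines above, connect_disconnect.sh first builds a complete target over RPC and then attaches and detaches an initiator num_iterations (5) times. The target-side calls, replayed from the trace with rpc.py standing in for the test's rpc_cmd wrapper:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 64 512          # 64 MiB bdev, 512-byte blocks -> "Malloc0"
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Each iteration on the initiator side is then roughly "nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1" followed by "nvme disconnect -n nqn.2016-06.io.spdk:cnode1", the latter printing the disconnect lines seen above.
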
common/autotest_common.sh@948 -- # '[' -z 2216072 ']' 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2216072 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2216072 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2216072' 00:07:16.888 killing process with pid 2216072 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2216072 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2216072 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:16.888 10:20:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.416 10:20:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:19.416 00:07:19.416 real 0m18.744s 00:07:19.416 user 0m56.390s 00:07:19.416 sys 0m3.278s 00:07:19.417 10:20:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.417 10:20:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:19.417 ************************************ 00:07:19.417 END TEST nvmf_connect_disconnect 00:07:19.417 ************************************ 00:07:19.417 10:20:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:19.417 10:20:13 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:19.417 10:20:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:19.417 10:20:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.417 10:20:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:19.417 ************************************ 00:07:19.417 START TEST nvmf_multitarget 00:07:19.417 ************************************ 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:19.417 * Looking for test storage... 
00:07:19.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:19.417 10:20:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:21.377 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:21.378 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:21.378 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:21.378 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:21.378 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.378 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:21.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:21.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:07:21.379 00:07:21.379 --- 10.0.0.2 ping statistics --- 00:07:21.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.379 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:21.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:07:21.379 00:07:21.379 --- 10.0.0.1 ping statistics --- 00:07:21.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.379 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2219735 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2219735 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2219735 ']' 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.379 10:20:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:21.379 [2024-07-15 10:20:15.871549] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:21.379 [2024-07-15 10:20:15.871648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.379 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.379 [2024-07-15 10:20:15.947491] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.637 [2024-07-15 10:20:16.072610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.637 [2024-07-15 10:20:16.072670] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.637 [2024-07-15 10:20:16.072686] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.637 [2024-07-15 10:20:16.072699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.637 [2024-07-15 10:20:16.072711] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.637 [2024-07-15 10:20:16.072793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.637 [2024-07-15 10:20:16.072826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.637 [2024-07-15 10:20:16.072888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.637 [2024-07-15 10:20:16.072896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.637 10:20:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.637 10:20:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:21.637 10:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:21.637 10:20:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:21.637 10:20:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:21.637 10:20:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.637 10:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:21.637 10:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:21.637 10:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:21.895 10:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:21.895 10:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:21.895 "nvmf_tgt_1" 00:07:21.895 10:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:22.153 "nvmf_tgt_2" 00:07:22.153 10:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:22.153 10:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:22.153 10:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:07:22.153 10:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:22.153 true 00:07:22.411 10:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:22.411 true 00:07:22.411 10:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:22.411 10:20:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:22.411 10:20:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:22.411 10:20:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:22.411 10:20:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:22.411 10:20:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:22.411 10:20:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:22.411 10:20:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:22.411 10:20:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:22.411 10:20:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:22.411 10:20:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:22.411 rmmod nvme_tcp 00:07:22.411 rmmod nvme_fabrics 00:07:22.411 rmmod nvme_keyring 00:07:22.669 10:20:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:22.669 10:20:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:22.669 10:20:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:22.669 10:20:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2219735 ']' 00:07:22.669 10:20:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2219735 00:07:22.669 10:20:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2219735 ']' 00:07:22.669 10:20:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2219735 00:07:22.669 10:20:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:22.669 10:20:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:22.669 10:20:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2219735 00:07:22.669 10:20:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:22.669 10:20:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:22.669 10:20:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2219735' 00:07:22.669 killing process with pid 2219735 00:07:22.669 10:20:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2219735 00:07:22.669 10:20:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2219735 00:07:22.929 10:20:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:22.929 10:20:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:22.929 10:20:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:22.929 10:20:17 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:22.929 10:20:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:22.929 10:20:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.929 10:20:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.929 10:20:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.892 10:20:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:24.892 00:07:24.892 real 0m5.899s 00:07:24.892 user 0m6.655s 00:07:24.892 sys 0m1.966s 00:07:24.892 10:20:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.892 10:20:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:24.892 ************************************ 00:07:24.892 END TEST nvmf_multitarget 00:07:24.892 ************************************ 00:07:24.892 10:20:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:24.892 10:20:19 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:24.892 10:20:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:24.892 10:20:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.892 10:20:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.892 ************************************ 00:07:24.892 START TEST nvmf_rpc 00:07:24.892 ************************************ 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:24.892 * Looking for test storage... 
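For reference, the nvmf_multitarget pass that just finished reduces to this RPC sequence (condensed from the xtrace above; $rootdir stands in for the full jenkins workspace path):

rpc=$rootdir/test/nvmf/target/multitarget_rpc.py

$rpc nvmf_get_targets | jq length              # 1: only the default target
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
$rpc nvmf_get_targets | jq length              # 3: default plus the two new ones
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
$rpc nvmf_get_targets | jq length              # back to 1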
00:07:24.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:24.892 10:20:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
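One detail from the common.sh setup above worth noting: the host identity used by every nvme connect in this test is derived once from nvme-cli, and the trailing UUID of the generated NQN is reused as the hostid. A minimal sketch of that derivation (variable names as in the trace; the parameter expansion is an inferred equivalent, not copied from common.sh):

# emits e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
NVME_HOSTNQN=$(nvme gen-hostnqn)
# the hostid is the UUID after the last ':' of that NQN
NVME_HOSTID=${NVME_HOSTNQN##*:}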
00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:27.424 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:27.424 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:27.424 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:27.424 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:27.424 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:27.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:27.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:07:27.425 00:07:27.425 --- 10.0.0.2 ping statistics --- 00:07:27.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.425 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:27.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:27.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:07:27.425 00:07:27.425 --- 10.0.0.1 ping statistics --- 00:07:27.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.425 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2221836 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2221836 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2221836 ']' 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.425 10:20:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.425 [2024-07-15 10:20:21.663679] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:27.425 [2024-07-15 10:20:21.663760] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.425 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.425 [2024-07-15 10:20:21.733887] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.425 [2024-07-15 10:20:21.857802] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.425 [2024-07-15 10:20:21.857854] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
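As these NOTICE lines indicate, the 0xFFFF tracepoint mask leaves a live trace shm for this run. If a failure needed post-mortem analysis, a snapshot could be captured roughly like this (commands and paths taken from the notices themselves; the output filename is an assumption):

# live snapshot from the running app (shm name nvmf, instance id 0)
spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt

# or keep the raw shm file for offline decoding
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0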
00:07:27.425 [2024-07-15 10:20:21.857871] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.425 [2024-07-15 10:20:21.857892] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.425 [2024-07-15 10:20:21.857905] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.425 [2024-07-15 10:20:21.861900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.425 [2024-07-15 10:20:21.861948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.425 [2024-07-15 10:20:21.862000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.425 [2024-07-15 10:20:21.862005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:28.358 "tick_rate": 2700000000, 00:07:28.358 "poll_groups": [ 00:07:28.358 { 00:07:28.358 "name": "nvmf_tgt_poll_group_000", 00:07:28.358 "admin_qpairs": 0, 00:07:28.358 "io_qpairs": 0, 00:07:28.358 "current_admin_qpairs": 0, 00:07:28.358 "current_io_qpairs": 0, 00:07:28.358 "pending_bdev_io": 0, 00:07:28.358 "completed_nvme_io": 0, 00:07:28.358 "transports": [] 00:07:28.358 }, 00:07:28.358 { 00:07:28.358 "name": "nvmf_tgt_poll_group_001", 00:07:28.358 "admin_qpairs": 0, 00:07:28.358 "io_qpairs": 0, 00:07:28.358 "current_admin_qpairs": 0, 00:07:28.358 "current_io_qpairs": 0, 00:07:28.358 "pending_bdev_io": 0, 00:07:28.358 "completed_nvme_io": 0, 00:07:28.358 "transports": [] 00:07:28.358 }, 00:07:28.358 { 00:07:28.358 "name": "nvmf_tgt_poll_group_002", 00:07:28.358 "admin_qpairs": 0, 00:07:28.358 "io_qpairs": 0, 00:07:28.358 "current_admin_qpairs": 0, 00:07:28.358 "current_io_qpairs": 0, 00:07:28.358 "pending_bdev_io": 0, 00:07:28.358 "completed_nvme_io": 0, 00:07:28.358 "transports": [] 00:07:28.358 }, 00:07:28.358 { 00:07:28.358 "name": "nvmf_tgt_poll_group_003", 00:07:28.358 "admin_qpairs": 0, 00:07:28.358 "io_qpairs": 0, 00:07:28.358 "current_admin_qpairs": 0, 00:07:28.358 "current_io_qpairs": 0, 00:07:28.358 "pending_bdev_io": 0, 00:07:28.358 "completed_nvme_io": 0, 00:07:28.358 "transports": [] 00:07:28.358 } 00:07:28.358 ] 00:07:28.358 }' 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:28.358 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.359 [2024-07-15 10:20:22.791373] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:28.359 "tick_rate": 2700000000, 00:07:28.359 "poll_groups": [ 00:07:28.359 { 00:07:28.359 "name": "nvmf_tgt_poll_group_000", 00:07:28.359 "admin_qpairs": 0, 00:07:28.359 "io_qpairs": 0, 00:07:28.359 "current_admin_qpairs": 0, 00:07:28.359 "current_io_qpairs": 0, 00:07:28.359 "pending_bdev_io": 0, 00:07:28.359 "completed_nvme_io": 0, 00:07:28.359 "transports": [ 00:07:28.359 { 00:07:28.359 "trtype": "TCP" 00:07:28.359 } 00:07:28.359 ] 00:07:28.359 }, 00:07:28.359 { 00:07:28.359 "name": "nvmf_tgt_poll_group_001", 00:07:28.359 "admin_qpairs": 0, 00:07:28.359 "io_qpairs": 0, 00:07:28.359 "current_admin_qpairs": 0, 00:07:28.359 "current_io_qpairs": 0, 00:07:28.359 "pending_bdev_io": 0, 00:07:28.359 "completed_nvme_io": 0, 00:07:28.359 "transports": [ 00:07:28.359 { 00:07:28.359 "trtype": "TCP" 00:07:28.359 } 00:07:28.359 ] 00:07:28.359 }, 00:07:28.359 { 00:07:28.359 "name": "nvmf_tgt_poll_group_002", 00:07:28.359 "admin_qpairs": 0, 00:07:28.359 "io_qpairs": 0, 00:07:28.359 "current_admin_qpairs": 0, 00:07:28.359 "current_io_qpairs": 0, 00:07:28.359 "pending_bdev_io": 0, 00:07:28.359 "completed_nvme_io": 0, 00:07:28.359 "transports": [ 00:07:28.359 { 00:07:28.359 "trtype": "TCP" 00:07:28.359 } 00:07:28.359 ] 00:07:28.359 }, 00:07:28.359 { 00:07:28.359 "name": "nvmf_tgt_poll_group_003", 00:07:28.359 "admin_qpairs": 0, 00:07:28.359 "io_qpairs": 0, 00:07:28.359 "current_admin_qpairs": 0, 00:07:28.359 "current_io_qpairs": 0, 00:07:28.359 "pending_bdev_io": 0, 00:07:28.359 "completed_nvme_io": 0, 00:07:28.359 "transports": [ 00:07:28.359 { 00:07:28.359 "trtype": "TCP" 00:07:28.359 } 00:07:28.359 ] 00:07:28.359 } 00:07:28.359 ] 00:07:28.359 }' 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
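The jcount and jsum helpers being traced here are small jq wrappers. Reconstructed from the xtrace (the bodies are inferred from the traced commands, including how the stats JSON is fed in, and are not copied from rpc.sh):

# count how many values a jq filter yields from the captured stats
jcount() {
    local filter=$1
    jq "$filter" <<< "$stats" | wc -l
}

# sum the numeric values a jq filter yields across all poll groups
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}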
00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.359 Malloc1 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.359 [2024-07-15 10:20:22.940622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:28.359 [2024-07-15 10:20:22.963110] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:28.359 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:28.359 could not add new controller: failed to write to nvme-fabrics device 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.359 10:20:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.291 10:20:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:29.291 10:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:29.291 10:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:29.291 10:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:29.291 10:20:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:31.187 10:20:25 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:31.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:31.187 [2024-07-15 10:20:25.741897] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:31.187 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:31.187 could not add new controller: failed to write to nvme-fabrics device 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.187 10:20:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.188 10:20:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:32.120 10:20:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:32.120 10:20:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:32.120 10:20:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:32.120 10:20:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:32.120 10:20:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:34.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:34.017 10:20:28 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.017 [2024-07-15 10:20:28.568970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.017 10:20:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:34.951 10:20:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:34.951 10:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:34.951 10:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:34.951 10:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:34.951 10:20:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:36.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.849 [2024-07-15 10:20:31.384027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:36.849 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.850 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.850 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.850 10:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:36.850 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.850 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.850 10:20:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.850 10:20:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:37.782 10:20:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:37.782 10:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:37.782 10:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:37.782 10:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:37.782 10:20:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:39.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.678 [2024-07-15 10:20:34.229373] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.678 10:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:40.243 10:20:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:40.243 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:40.243 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:40.243 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:40.243 10:20:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:42.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.805 [2024-07-15 10:20:36.993535] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.805 10:20:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.805 10:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.805 10:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:42.805 10:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.805 10:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.805 10:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.805 10:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:43.063 10:20:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:43.063 10:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:43.063 10:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:43.063 10:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:43.063 10:20:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:44.960 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:44.960 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:44.960 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:45.217 
10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:45.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.217 [2024-07-15 10:20:39.730976] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.217 10:20:39 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.217 10:20:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:45.782 10:20:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:45.782 10:20:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:45.782 10:20:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:45.782 10:20:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:45.782 10:20:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:48.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 [2024-07-15 10:20:42.476773] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 [2024-07-15 10:20:42.524821] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 [2024-07-15 10:20:42.573000] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
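The connect/wait/disconnect cycles above lean on two polling helpers from common/autotest_common.sh. A minimal sketch of the pattern the xtrace shows, assuming the same 15-retry budget (the real helpers carry extra error handling):

    # Poll lsblk until the expected number of namespaces with this serial appears.
    waitforserial() {
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        while ((i++ <= 15)); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
            ((nvme_devices == nvme_device_counter)) && return 0
        done
        return 1
    }

    # Poll until no block device with this serial remains after nvme disconnect.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while ((i++ <= 15)); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 1
        done
        return 1
    }

Counting with grep -c rather than grep -q lets the same helper wait for more than one namespace per serial when a test attaches several.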
00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 [2024-07-15 10:20:42.621144] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.307 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
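For reference, the loop being exercised here (target/rpc.sh, the @99-@107 block) builds up and tears down the same subsystem five times without a host ever attaching. Condensed into plain rpc.py calls, with rpc standing in for the framework's rpc_cmd wrapper:

    rpc="scripts/rpc.py"                  # adjust to your SPDK checkout
    nqn="nqn.2016-06.io.spdk:cnode1"
    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1    # gets namespace ID 1
        $rpc nvmf_subsystem_allow_any_host "$nqn"
        $rpc nvmf_subsystem_remove_ns "$nqn" 1
        $rpc nvmf_delete_subsystem "$nqn"
    done

Unlike the @81-@94 loop above, no nvme connect happens between setup and teardown, so this variant exercises only the RPC surface under rapid create/delete churn.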
00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.308 [2024-07-15 10:20:42.669341] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:48.308 "tick_rate": 2700000000, 00:07:48.308 "poll_groups": [ 00:07:48.308 { 00:07:48.308 "name": "nvmf_tgt_poll_group_000", 00:07:48.308 "admin_qpairs": 2, 00:07:48.308 "io_qpairs": 84, 00:07:48.308 "current_admin_qpairs": 0, 00:07:48.308 "current_io_qpairs": 0, 00:07:48.308 "pending_bdev_io": 0, 00:07:48.308 "completed_nvme_io": 178, 00:07:48.308 "transports": [ 00:07:48.308 { 00:07:48.308 "trtype": "TCP" 00:07:48.308 } 00:07:48.308 ] 00:07:48.308 }, 00:07:48.308 { 00:07:48.308 "name": "nvmf_tgt_poll_group_001", 00:07:48.308 "admin_qpairs": 2, 00:07:48.308 "io_qpairs": 84, 00:07:48.308 "current_admin_qpairs": 0, 00:07:48.308 "current_io_qpairs": 0, 00:07:48.308 "pending_bdev_io": 0, 00:07:48.308 "completed_nvme_io": 111, 00:07:48.308 "transports": [ 00:07:48.308 { 00:07:48.308 "trtype": "TCP" 00:07:48.308 } 00:07:48.308 ] 00:07:48.308 }, 00:07:48.308 { 00:07:48.308 
"name": "nvmf_tgt_poll_group_002", 00:07:48.308 "admin_qpairs": 1, 00:07:48.308 "io_qpairs": 84, 00:07:48.308 "current_admin_qpairs": 0, 00:07:48.308 "current_io_qpairs": 0, 00:07:48.308 "pending_bdev_io": 0, 00:07:48.308 "completed_nvme_io": 206, 00:07:48.308 "transports": [ 00:07:48.308 { 00:07:48.308 "trtype": "TCP" 00:07:48.308 } 00:07:48.308 ] 00:07:48.308 }, 00:07:48.308 { 00:07:48.308 "name": "nvmf_tgt_poll_group_003", 00:07:48.308 "admin_qpairs": 2, 00:07:48.308 "io_qpairs": 84, 00:07:48.308 "current_admin_qpairs": 0, 00:07:48.308 "current_io_qpairs": 0, 00:07:48.308 "pending_bdev_io": 0, 00:07:48.308 "completed_nvme_io": 191, 00:07:48.308 "transports": [ 00:07:48.308 { 00:07:48.308 "trtype": "TCP" 00:07:48.308 } 00:07:48.308 ] 00:07:48.308 } 00:07:48.308 ] 00:07:48.308 }' 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:48.308 rmmod nvme_tcp 00:07:48.308 rmmod nvme_fabrics 00:07:48.308 rmmod nvme_keyring 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2221836 ']' 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2221836 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2221836 ']' 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2221836 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2221836 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2221836' 00:07:48.308 killing process with pid 2221836 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2221836 00:07:48.308 10:20:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2221836 00:07:48.567 10:20:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:48.567 10:20:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:48.567 10:20:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:48.567 10:20:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.567 10:20:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:48.567 10:20:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.567 10:20:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.567 10:20:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.100 10:20:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:51.100 00:07:51.100 real 0m25.757s 00:07:51.100 user 1m24.546s 00:07:51.100 sys 0m4.003s 00:07:51.100 10:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.100 10:20:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.100 ************************************ 00:07:51.100 END TEST nvmf_rpc 00:07:51.100 ************************************ 00:07:51.100 10:20:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:51.100 10:20:45 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:51.100 10:20:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:51.100 10:20:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.100 10:20:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.100 ************************************ 00:07:51.100 START TEST nvmf_invalid 00:07:51.100 ************************************ 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:51.100 * Looking for test storage... 
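The stats check in the nvmf_rpc epilogue above summed per-poll-group counters out of nvmf_get_stats with the jsum helper, a small jq-plus-awk pipeline. Roughly, assuming stats holds the JSON captured in the trace:

    # Sum one numeric field across all poll groups in the nvmf_get_stats output.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
    }

    stats=$(scripts/rpc.py nvmf_get_stats)
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 336 in this run

jq emits one number per poll group and awk folds them into a single total, which keeps the assertion independent of how many poll groups (cores) the target was started with.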
00:07:51.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:51.100 10:20:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:51.101 10:20:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:51.101 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:51.101 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.101 10:20:45 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:51.101 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:51.101 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:51.101 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.101 10:20:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.101 10:20:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.101 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:51.101 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:51.101 10:20:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:51.101 10:20:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:53.003 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:53.003 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:53.003 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:53.003 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:53.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:53.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:07:53.003 00:07:53.003 --- 10.0.0.2 ping statistics --- 00:07:53.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.003 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:07:53.003 00:07:53.003 --- 10.0.0.1 ping statistics --- 00:07:53.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.003 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:53.003 10:20:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.004 10:20:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:53.004 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2226463 00:07:53.004 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2226463 00:07:53.004 10:20:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2226463 ']' 00:07:53.004 10:20:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.004 10:20:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.004 10:20:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:53.004 10:20:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.004 10:20:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.004 10:20:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:53.004 [2024-07-15 10:20:47.565639] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
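The phy-NIC setup that produced the pings above moves one E810 port (cvl_0_0) into a private network namespace and leaves its peer (cvl_0_1) in the root namespace, so initiator and target traffic cross the real wire. The wiring, reduced to the commands visible in the trace, with nvmf_tgt launched the way nvmfappstart does here (the until loop is a simplified stand-in for the real waitforlisten):

    NS="ip netns exec cvl_0_0_ns_spdk"
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root ns
    $NS ip addr add 10.0.0.2/24 dev cvl_0_0      # target side, private ns
    ip link set cvl_0_1 up
    $NS ip link set cvl_0_0 up
    $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                           # root ns -> target ns
    $NS ping -c 1 10.0.0.1                       # target ns -> root ns

    # Start the target inside the namespace and wait for its RPC socket.
    $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done

Running the target in its own namespace is what makes the later 10.0.0.2 listeners reachable only through the physical link rather than via loopback.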
00:07:53.004 [2024-07-15 10:20:47.565727] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.004 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.004 [2024-07-15 10:20:47.634444] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.261 [2024-07-15 10:20:47.756338] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.261 [2024-07-15 10:20:47.756395] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.261 [2024-07-15 10:20:47.756412] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.261 [2024-07-15 10:20:47.756425] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.261 [2024-07-15 10:20:47.756436] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.261 [2024-07-15 10:20:47.756536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.261 [2024-07-15 10:20:47.756570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.261 [2024-07-15 10:20:47.756621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.261 [2024-07-15 10:20:47.756624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.193 10:20:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.193 10:20:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:54.193 10:20:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:54.193 10:20:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.193 10:20:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:54.193 10:20:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.193 10:20:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:54.193 10:20:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6427 00:07:54.193 [2024-07-15 10:20:48.782668] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:54.193 10:20:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:54.193 { 00:07:54.193 "nqn": "nqn.2016-06.io.spdk:cnode6427", 00:07:54.193 "tgt_name": "foobar", 00:07:54.193 "method": "nvmf_create_subsystem", 00:07:54.193 "req_id": 1 00:07:54.193 } 00:07:54.193 Got JSON-RPC error response 00:07:54.193 response: 00:07:54.193 { 00:07:54.194 "code": -32603, 00:07:54.194 "message": "Unable to find target foobar" 00:07:54.194 }' 00:07:54.194 10:20:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:54.194 { 00:07:54.194 "nqn": "nqn.2016-06.io.spdk:cnode6427", 00:07:54.194 "tgt_name": "foobar", 00:07:54.194 "method": "nvmf_create_subsystem", 00:07:54.194 "req_id": 1 00:07:54.194 } 00:07:54.194 Got JSON-RPC error response 00:07:54.194 response: 00:07:54.194 { 00:07:54.194 "code": -32603, 00:07:54.194 "message": "Unable to find target foobar" 00:07:54.194 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:54.194 10:20:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:54.194 10:20:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode7350 00:07:54.451 [2024-07-15 10:20:49.047562] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7350: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:54.451 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:54.451 { 00:07:54.451 "nqn": "nqn.2016-06.io.spdk:cnode7350", 00:07:54.451 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:54.451 "method": "nvmf_create_subsystem", 00:07:54.451 "req_id": 1 00:07:54.451 } 00:07:54.451 Got JSON-RPC error response 00:07:54.451 response: 00:07:54.451 { 00:07:54.451 "code": -32602, 00:07:54.451 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:54.451 }' 00:07:54.451 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:54.451 { 00:07:54.451 "nqn": "nqn.2016-06.io.spdk:cnode7350", 00:07:54.451 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:54.451 "method": "nvmf_create_subsystem", 00:07:54.451 "req_id": 1 00:07:54.451 } 00:07:54.451 Got JSON-RPC error response 00:07:54.451 response: 00:07:54.451 { 00:07:54.451 "code": -32602, 00:07:54.451 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:54.451 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:54.451 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:54.451 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29547 00:07:54.709 [2024-07-15 10:20:49.304376] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29547: invalid model number 'SPDK_Controller' 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:54.709 { 00:07:54.709 "nqn": "nqn.2016-06.io.spdk:cnode29547", 00:07:54.709 "model_number": "SPDK_Controller\u001f", 00:07:54.709 "method": "nvmf_create_subsystem", 00:07:54.709 "req_id": 1 00:07:54.709 } 00:07:54.709 Got JSON-RPC error response 00:07:54.709 response: 00:07:54.709 { 00:07:54.709 "code": -32602, 00:07:54.709 "message": "Invalid MN SPDK_Controller\u001f" 00:07:54.709 }' 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:54.709 { 00:07:54.709 "nqn": "nqn.2016-06.io.spdk:cnode29547", 00:07:54.709 "model_number": "SPDK_Controller\u001f", 00:07:54.709 "method": "nvmf_create_subsystem", 00:07:54.709 "req_id": 1 00:07:54.709 } 00:07:54.709 Got JSON-RPC error response 00:07:54.709 response: 00:07:54.709 { 00:07:54.709 "code": -32602, 00:07:54.709 "message": "Invalid MN SPDK_Controller\u001f" 00:07:54.709 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' 
'87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:07:54.709 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.710 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.710 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:54.710 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:54.710 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:54.710 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.710 10:20:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.710 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:07:54.710 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.967 10:20:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:54.967 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:54.968 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:54.968 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:54.968 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ W == \- ]] 00:07:54.968 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Wop L~5(>zob738INq6%K' 00:07:54.968 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Wop L~5(>zob738INq6%K' nqn.2016-06.io.spdk:cnode6826 00:07:55.226 [2024-07-15 10:20:49.641468] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6826: invalid serial number 'Wop L~5(>zob738INq6%K' 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:55.226 { 00:07:55.226 "nqn": "nqn.2016-06.io.spdk:cnode6826", 00:07:55.226 "serial_number": "Wop L~5(>zob738INq6%K", 00:07:55.226 "method": "nvmf_create_subsystem", 00:07:55.226 "req_id": 1 00:07:55.226 } 00:07:55.226 Got JSON-RPC error response 00:07:55.226 response: 00:07:55.226 { 00:07:55.226 
"code": -32602, 00:07:55.226 "message": "Invalid SN Wop L~5(>zob738INq6%K" 00:07:55.226 }' 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:55.226 { 00:07:55.226 "nqn": "nqn.2016-06.io.spdk:cnode6826", 00:07:55.226 "serial_number": "Wop L~5(>zob738INq6%K", 00:07:55.226 "method": "nvmf_create_subsystem", 00:07:55.226 "req_id": 1 00:07:55.226 } 00:07:55.226 Got JSON-RPC error response 00:07:55.226 response: 00:07:55.226 { 00:07:55.226 "code": -32602, 00:07:55.226 "message": "Invalid SN Wop L~5(>zob738INq6%K" 00:07:55.226 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:07:55.226 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:07:55.227 
10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:55.227 
10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:07:55.227 10:20:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.227 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.228 10:20:49 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ } == \- ]] 00:07:55.228 10:20:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '}nIQ0pt` /dev/null' 00:07:58.103 10:20:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.003 10:20:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:00.003 00:08:00.003 real 0m9.304s 00:08:00.003 user 0m22.697s 00:08:00.003 sys 0m2.519s 00:08:00.003 10:20:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.003 10:20:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:00.003 ************************************ 00:08:00.003 END TEST nvmf_invalid 00:08:00.003 ************************************ 00:08:00.003 10:20:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:00.003 10:20:54 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:00.003 10:20:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:00.003 10:20:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.003 10:20:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:00.003 ************************************ 00:08:00.003 START TEST nvmf_abort 00:08:00.003 ************************************ 00:08:00.003 10:20:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:00.263 * Looking for test storage... 
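The nvmf_invalid stretch above is target/invalid.sh probing nvmf_create_subsystem's failure paths over JSON-RPC: each case captures the error body in $out and pattern-matches it (the backslash-heavy globs such as *\I\n\v\a\l\i\d\ \S\N* are just xtrace's rendering of *"Invalid SN"*), and the long character-by-character loops are its gen_random_s helper building random serial and model numbers. A minimal sketch of both, reconstructed from the xtrace output rather than copied from the upstream script:

# gen_random_s LENGTH: build a LENGTH-char string from ASCII codes 32..127,
# mirroring the chars=('32' ... '127') / printf %x / echo -e steps traced above
gen_random_s() {
    local length=$1 ll string=
    local chars=($(seq 32 127))
    for ((ll = 0; ll < length; ll++)); do
        string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    [[ ${string:0:1} == - ]] && string=" ${string:1}"  # assumed intent of the '[[ ... == \- ]]' guard: no leading dash
    echo "$string"
}

# drive the target the way the trace does and assert on the error text
out=$(scripts/rpc.py nvmf_create_subsystem -s "$(gen_random_s 21)" nqn.2016-06.io.spdk:cnode6826 2>&1) || true
[[ $out == *"Invalid SN"* ]]

The same pattern repeats for the model number case, with gen_random_s 41 and an *"Invalid MN"* match.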
00:08:00.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
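A few records up, nvmf/common.sh mints the initiator identity with nvme-cli: nvme gen-hostnqn returns a UUID-based NQN, the host ID is that NQN's UUID tail, and both are packed into the NVME_HOST array for later use. A hedged sketch of how that identity is typically handed to nvme connect (the connect call itself falls outside this excerpt, and the NVME_HOSTID derivation shown is an assumption that matches the values in this log):

NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
NVME_HOSTID=${NVME_HOSTNQN##*:}      # the UUID tail, matching the NVME_HOSTID traced above
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"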
00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:00.263 10:20:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:02.162 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.162 10:20:56 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:02.162 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:02.162 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.162 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:02.163 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.163 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:02.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:08:02.433 00:08:02.433 --- 10.0.0.2 ping statistics --- 00:08:02.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.433 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
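The nvmf_tcp_init sequence above turns the two ice ports into a self-contained TCP fixture: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule admits port 4420, and the pings around this point verify reachability in both directions. A condensed replay of those exact commands, run as root:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator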
00:08:02.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:08:02.433 00:08:02.433 --- 10.0.0.1 ping statistics --- 00:08:02.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.433 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2229114 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2229114 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2229114 ']' 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:02.433 10:20:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:02.433 [2024-07-15 10:20:56.980682] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:02.433 [2024-07-15 10:20:56.980758] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.433 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.433 [2024-07-15 10:20:57.050045] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:02.690 [2024-07-15 10:20:57.172222] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.690 [2024-07-15 10:20:57.172288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:02.690 [2024-07-15 10:20:57.172304] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.690 [2024-07-15 10:20:57.172317] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.690 [2024-07-15 10:20:57.172329] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.690 [2024-07-15 10:20:57.172424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.690 [2024-07-15 10:20:57.172492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.690 [2024-07-15 10:20:57.172495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.620 10:20:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:03.620 10:20:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:03.620 10:20:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:03.620 10:20:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:03.620 10:20:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:03.620 10:20:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.620 10:20:57 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:03.620 10:20:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.620 10:20:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:03.620 [2024-07-15 10:20:57.954265] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.620 10:20:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.620 10:20:57 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:03.620 10:20:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.620 10:20:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:03.620 Malloc0 00:08:03.620 10:20:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.620 10:20:57 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:03.620 10:20:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.620 10:20:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:03.620 Delay0 00:08:03.620 10:20:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.620 10:20:58 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:03.620 10:20:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.620 10:20:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:03.620 10:20:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.620 10:20:58 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:03.620 10:20:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.620 10:20:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:03.620 10:20:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.620 10:20:58 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:03.620 10:20:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.620 10:20:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:03.620 [2024-07-15 10:20:58.023975] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.620 10:20:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.620 10:20:58 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:03.620 10:20:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.620 10:20:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:03.620 10:20:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.620 10:20:58 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:03.620 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.620 [2024-07-15 10:20:58.130352] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:06.144 Initializing NVMe Controllers 00:08:06.144 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:06.144 controller IO queue size 128 less than required 00:08:06.144 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:06.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:06.144 Initialization complete. Launching workers. 
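
Condensed from the xtrace above, the nvmf_abort target bring-up and host workload reduce to the following sequence. This is a readability sketch, not the verbatim script: the long workspace prefixes are abbreviated to rpc.py and build paths, and the ip netns exec cvl_0_0_ns_spdk wrapper around the target process is omitted. The abort counters it produces follow below.

  # Target side (run inside the cvl_0_0_ns_spdk namespace in the real test):
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  # 64 MiB of RAM-backed storage wrapped in a delay bdev that holds every I/O
  # for ~1 s (delay arguments are in microseconds), so commands stay in flight
  # long enough to be aborted.
  rpc.py bdev_malloc_create 64 4096 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Host side: keep 128 commands queued for 1 second and abort them mid-flight.
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
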
00:08:06.144 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32462 00:08:06.144 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32523, failed to submit 62 00:08:06.144 success 32466, unsuccess 57, failed 0 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:06.144 rmmod nvme_tcp 00:08:06.144 rmmod nvme_fabrics 00:08:06.144 rmmod nvme_keyring 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2229114 ']' 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2229114 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2229114 ']' 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2229114 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:08:06.144 10:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:06.145 10:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2229114 00:08:06.145 10:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:06.145 10:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:06.145 10:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2229114' 00:08:06.145 killing process with pid 2229114 00:08:06.145 10:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2229114 00:08:06.145 10:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2229114 00:08:06.145 10:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:06.145 10:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:06.145 10:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:06.145 10:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:06.145 10:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:06.145 10:21:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.145 10:21:00 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.145 10:21:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.680 10:21:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:08.680 00:08:08.680 real 0m8.080s 00:08:08.680 user 0m12.923s 00:08:08.680 sys 0m2.618s 00:08:08.680 10:21:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.680 10:21:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:08.680 ************************************ 00:08:08.680 END TEST nvmf_abort 00:08:08.680 ************************************ 00:08:08.680 10:21:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:08.680 10:21:02 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:08.680 10:21:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:08.680 10:21:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.680 10:21:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:08.680 ************************************ 00:08:08.680 START TEST nvmf_ns_hotplug_stress 00:08:08.680 ************************************ 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:08.680 * Looking for test storage... 00:08:08.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.680 10:21:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.680 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.681 10:21:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:08.681 10:21:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:10.584 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:10.584 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.584 10:21:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:10.584 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:10.584 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:10.585 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.585 10:21:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:10.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:08:10.585 00:08:10.585 --- 10.0.0.2 ping statistics --- 00:08:10.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.585 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:10.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:08:10.585 00:08:10.585 --- 10.0.0.1 ping statistics --- 00:08:10.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.585 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2231640 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2231640 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2231640 ']' 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:10.585 10:21:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:10.585 [2024-07-15 10:21:05.045821] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
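
Before the target app comes up, nvmftestinit wires the two ice-driven e810 ports into a loopback pair: cvl_0_0 is moved into a private network namespace for the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). A standalone sketch of the equivalent commands, lifted from the trace above:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # The target port gets its own namespace so host and target network stacks
  # stay separate even though both NIC ports sit in the same machine.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP (port 4420) in from the initiator port, then verify
  # reachability in both directions, as the ping output above shows.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
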
00:08:10.585 [2024-07-15 10:21:05.045921] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.585 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.585 [2024-07-15 10:21:05.109571] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:10.585 [2024-07-15 10:21:05.220689] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.585 [2024-07-15 10:21:05.220753] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.585 [2024-07-15 10:21:05.220782] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.585 [2024-07-15 10:21:05.220793] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.585 [2024-07-15 10:21:05.220803] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.585 [2024-07-15 10:21:05.220901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.585 [2024-07-15 10:21:05.221019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.585 [2024-07-15 10:21:05.221022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.843 10:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.843 10:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:08:10.843 10:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:10.843 10:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:10.843 10:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:10.843 10:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.843 10:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:10.843 10:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:11.100 [2024-07-15 10:21:05.583310] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.100 10:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:11.358 10:21:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:11.615 [2024-07-15 10:21:06.102071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.615 10:21:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:11.872 10:21:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:08:12.130 Malloc0 00:08:12.130 10:21:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:12.388 Delay0 00:08:12.388 10:21:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.646 10:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:12.904 NULL1 00:08:12.904 10:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:13.161 10:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2232236 00:08:13.161 10:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:13.161 10:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:13.161 10:21:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.161 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.531 Read completed with error (sct=0, sc=11) 00:08:14.531 10:21:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.823 10:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:14.823 10:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:15.081 true 00:08:15.081 10:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:15.081 10:21:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.647 10:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.904 10:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:15.904 10:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:16.162 true 00:08:16.162 
10:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:16.162 10:21:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.423 10:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.683 10:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:16.683 10:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:16.939 true 00:08:16.939 10:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:16.939 10:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.196 10:21:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.454 10:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:17.454 10:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:17.711 true 00:08:17.711 10:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:17.711 10:21:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.084 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.084 10:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.084 10:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:19.084 10:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:19.342 true 00:08:19.342 10:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:19.342 10:21:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.599 10:21:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.857 10:21:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:19.857 10:21:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:20.114 true 00:08:20.114 10:21:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 2232236 00:08:20.114 10:21:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.370 10:21:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.626 10:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:20.626 10:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:20.883 true 00:08:20.883 10:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:20.883 10:21:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.815 10:21:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.073 10:21:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:22.073 10:21:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:22.331 true 00:08:22.331 10:21:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:22.331 10:21:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.588 10:21:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.846 10:21:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:22.846 10:21:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:23.103 true 00:08:23.103 10:21:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:23.103 10:21:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.360 10:21:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.618 10:21:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:23.618 10:21:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:23.876 true 00:08:23.876 10:21:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:23.876 10:21:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.808 10:21:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.066 10:21:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:25.066 10:21:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:25.323 true 00:08:25.323 10:21:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:25.323 10:21:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.887 10:21:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.887 10:21:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:25.887 10:21:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:26.144 true 00:08:26.144 10:21:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:26.144 10:21:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.401 10:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.658 10:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:26.658 10:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:26.914 true 00:08:26.914 10:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:26.914 10:21:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.844 10:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.442 10:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:28.443 10:21:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:28.443 true 00:08:28.443 10:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:28.443 10:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.699 10:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.955 10:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:28.955 10:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:29.212 true 00:08:29.212 10:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:29.212 10:21:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.469 10:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.726 10:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:29.726 10:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:29.983 true 00:08:29.983 10:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:29.983 10:21:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.914 10:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.171 10:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:31.171 10:21:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:31.429 true 00:08:31.686 10:21:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:31.686 10:21:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.251 10:21:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.508 10:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:32.508 10:21:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:32.766 true 00:08:32.766 10:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:32.766 10:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.330 10:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.330 10:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:33.330 10:21:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:33.588 true 00:08:33.588 10:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:33.588 10:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.845 10:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.103 10:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:34.103 10:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:34.360 true 00:08:34.360 10:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:34.361 10:21:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.730 10:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.730 10:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:35.730 10:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:35.988 true 00:08:35.988 10:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:35.988 10:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.245 10:21:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.503 10:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:36.503 
10:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:36.761 true 00:08:36.761 10:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:36.761 10:21:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.694 10:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.950 10:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:37.950 10:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:38.207 true 00:08:38.207 10:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:38.207 10:21:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.464 10:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.722 10:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:38.722 10:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:38.979 true 00:08:38.979 10:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:38.979 10:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.236 10:21:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.494 10:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:39.494 10:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:39.752 true 00:08:39.752 10:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:39.752 10:21:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.120 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.120 10:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.120 10:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:41.120 10:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:41.377 true 00:08:41.377 10:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:41.377 10:21:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.634 10:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.892 10:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:41.892 10:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:42.183 true 00:08:42.183 10:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:42.183 10:21:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.114 10:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.371 10:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:43.371 10:21:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:43.627 Initializing NVMe Controllers 00:08:43.627 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:43.627 Controller IO queue size 128, less than required. 00:08:43.627 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:43.627 Controller IO queue size 128, less than required. 00:08:43.627 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:43.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:43.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:43.627 Initialization complete. Launching workers. 
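
The repetitive middle of this test is the hot-plug loop itself: while a 30-second spdk_nvme_perf run drives queued 512-byte random reads at the target (hence the streams of suppressed "Read completed with error" messages), the script keeps removing and re-adding namespace 1 and bumping NULL1's size by one megabyte per pass, from 1001 up to 1028 in this run. A simplified reconstruction follows; the control flow is approximated from the trace (the real script goes through rpc_cmd wrappers), with rpc.py standing in for the full workspace path. Perf's per-namespace summary comes right after.

  # Background workload: 30 s of queued random reads against the target.
  ./build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  # Keep yanking nsid 1 out from under the workload, plugging Delay0 back in,
  # and resizing the second namespace, until perf exits and kill -0 fails.
  while kill -0 $PERF_PID; do
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      rpc.py bdev_null_resize NULL1 $(( ++null_size ))
  done
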
00:08:43.627 ========================================================
00:08:43.627                                                                            Latency(us)
00:08:43.627 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:43.627 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     672.34       0.33   86254.10    3381.30 1041094.62
00:08:43.627 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    9812.81       4.79   13045.58    3630.58  448666.85
00:08:43.627 ========================================================
00:08:43.627 Total                                                                    :   10485.15       5.12   17739.94    3381.30 1041094.62
00:08:43.627
00:08:43.627 true
00:08:43.627 10:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2232236 00:08:43.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2232236) - No such process 00:08:43.627 10:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2232236 00:08:43.627 10:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.884 10:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:44.141 10:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:44.141 10:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:44.141 10:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:44.141 10:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:44.141 10:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:44.398 null0 00:08:44.398 10:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:44.398 10:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:44.398 10:21:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:44.654 null1 00:08:44.654 10:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:44.654 10:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:44.654 10:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:44.912 null2 00:08:44.912 10:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:44.912 10:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:44.912 10:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:45.169 null3 00:08:45.169 10:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:45.169 10:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:45.169 10:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:45.427 null4 00:08:45.427 10:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:45.427 10:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:45.427 10:21:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:45.684 null5 00:08:45.684 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:45.684 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:45.684 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:45.940 null6 00:08:45.940 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:45.940 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:45.940 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:46.198 null7 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
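The @44-@50 markers that filled the stretch of the log before the latency summary all come from one perturbation loop in ns_hotplug_stress.sh: while the I/O stressor (PID 2232236 in this run) stays alive, namespace 1 is detached and re-attached and the NULL1 bdev is grown by one unit per pass, which is why null_size counts 1022, 1023, 1024 and so on until the "No such process" message ends the loop. A minimal sketch of that cycle, reconstructed from the trace (the rpc shorthand, the starting size, and the stress_pid name are assumptions, not the verbatim script):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1000
  # Keep hot-plugging namespace 1 for as long as the I/O stressor runs.
  while kill -0 "$stress_pid" 2>/dev/null; do
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      (( ++null_size ))
      "$rpc" bdev_null_resize NULL1 "$null_size"   # prints 'true' on success
  done
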
00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
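The interleaved @14-@18 markers above all come from the same helper: add_remove binds one namespace ID to one null bdev and attaches and detaches it ten times, which is exactly the (( i < 10 )) counter visible in the trace. A sketch reconstructed from those markers (not the verbatim function):

  # add_remove <nsid> <bdev>: hot-plug one namespace ten times in a row.
  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }
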
00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
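The driver around that helper shows up as the @58-@64 markers: eight 100 MB null bdevs with 4096-byte blocks are created, one background add_remove worker is launched per bdev with its PID collected, and @66 then waits on all eight (the PID list 2236450 2236451 ... appears just below). Sketched under the same assumptions:

  nthreads=8
  pids=()
  # One hot-plug target per worker: null0..null7, 100 MB each, 4 KiB blocks.
  for ((i = 0; i < nthreads; i++)); do
      "$rpc" bdev_null_create "null$i" 100 4096
  done
  # Run the workers in parallel, then block until every one has exited.
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &
      pids+=($!)
  done
  wait "${pids[@]}"
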
00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2236450 2236451 2236453 2236455 2236457 2236459 2236461 2236463 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.198 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:46.456 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:46.456 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:46.456 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:46.456 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:46.456 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.456 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:46.456 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:46.456 10:21:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:46.715 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:46.973 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:46.973 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:46.973 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:46.973 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.973 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:46.973 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:46.973 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:46.973 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:47.231 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.231 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.231 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.232 10:21:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.232 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:47.490 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:47.490 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:47.490 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:47.490 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:47.490 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.490 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:47.490 10:21:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:47.490 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.748 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:47.749 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.749 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:47.749 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:47.749 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:48.006 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:48.006 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:48.006 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:48.006 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:48.006 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:48.006 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:48.006 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.006 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.265 10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.265 
10:21:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:48.523 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:48.523 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:48.523 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:48.523 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:48.523 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:48.523 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.523 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:48.523 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:48.780 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:49.037 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:49.037 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:49.037 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:49.037 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.037 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:49.037 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:49.037 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:49.037 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:49.294 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:49.295 10:21:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:49.552 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.811 
10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:49.811 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:49.811 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:49.811 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:49.811 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:49.811 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:49.811 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:50.068 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.068 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.069 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:50.327 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.327 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:50.327 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:50.327 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:50.327 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:50.327 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:50.327 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:50.327 10:21:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:50.584 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.584 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.584 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:50.585 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:50.843 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:50.843 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.843 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:50.843 
10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:50.843 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:50.843 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:50.843 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:50.843 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:51.101 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.101 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.101 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:51.101 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.101 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.101 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:51.101 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.101 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.102 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:51.102 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.102 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.102 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:51.102 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.102 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.102 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:51.102 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.102 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.102 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:51.102 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.102 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.102 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:51.102 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.102 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.102 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:51.360 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:51.360 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.360 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:51.360 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:51.360 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:51.360 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:51.360 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:51.360 10:21:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:51.617 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.617 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.617 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.617 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.617 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.617 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.617 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.617 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
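Only three script lines (ns_hotplug_stress.sh 16-18) produce all of the add/remove traffic above, yet the nsids arrive in shuffled batches; the likeliest explanation is one add/remove loop per namespace running in the background. The wrapper below is a reconstruction under that assumption, with the rpc.py path shortened; only the loop header and the two rpc.py calls are actually visible in the trace.

    rpc_py=spdk/scripts/rpc.py          # shortened; the log uses the full workspace path
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do                                 # line 16 in the trace
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # line 17
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"          # line 18
        done
    }

    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &   # eight concurrent hotplug workers
    done
    wait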
00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:51.618 rmmod nvme_tcp 00:08:51.618 rmmod nvme_fabrics 00:08:51.618 rmmod nvme_keyring 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2231640 ']' 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2231640 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2231640 ']' 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2231640 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2231640 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2231640' 00:08:51.618 killing process with pid 2231640 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2231640 00:08:51.618 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2231640 00:08:52.184 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:52.185 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:52.185 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:52.185 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:52.185 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:52.185 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.185 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.185 10:21:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.087 10:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:54.087 00:08:54.087 real 0m45.834s 00:08:54.087 user 3m24.620s 00:08:54.087 sys 0m18.010s 00:08:54.087 10:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.087 10:21:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.087 ************************************ 00:08:54.087 END TEST nvmf_ns_hotplug_stress 00:08:54.087 ************************************ 00:08:54.087 10:21:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:54.087 10:21:48 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:54.087 10:21:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:54.087 10:21:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.087 10:21:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:54.087 ************************************ 00:08:54.087 START TEST nvmf_connect_stress 00:08:54.087 ************************************ 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:54.087 * Looking for test storage... 
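The real/user/sys block and the START/END banners around each test come from the harness's run_test wrapper; user time of 3m24s against 45s of wall time reflects the multi-core target busy throughout the run. The helper lives in spdk/test/common/autotest_common.sh and does more than this (xtrace control, exit-code bookkeeping); the sketch below reproduces only the behavior visible in this log.

    run_test() {
        local name=$1
        shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                # emits the real/user/sys block above
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }

    run_test nvmf_connect_stress spdk/test/nvmf/target/connect_stress.sh --transport=tcp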
00:08:54.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:54.087 10:21:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:54.088 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:54.088 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.088 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:54.088 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:54.088 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:54.088 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.088 10:21:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:54.088 10:21:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.088 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:54.088 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:54.088 10:21:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:54.088 10:21:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:56.023 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:56.023 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:56.023 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.023 10:21:50 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:56.023 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:56.023 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:56.024 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:56.024 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:56.024 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:56.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:56.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:08:56.283 00:08:56.283 --- 10.0.0.2 ping statistics --- 00:08:56.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.283 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:56.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:56.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:08:56.283 00:08:56.283 --- 10.0.0.1 ping statistics --- 00:08:56.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.283 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2239216 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2239216 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2239216 ']' 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:56.283 10:21:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.283 [2024-07-15 10:21:50.767131] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
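Between the module checks and this point the harness walked the supported PCI IDs (two Intel 0x8086:0x159b E810 ports), picked up their net devices, and built the test topology: cvl_0_0 moves into a private network namespace as the 10.0.0.2 target, cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, and both directions are ping-verified before the target starts. Condensed from the commands logged above, with paths shortened and a sketch of the waitforlisten polling at the end (the until-loop body is an assumption; only the helper's name and its "Waiting for process..." message appear in the log):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!                                           # 2239216 in this run; mask 0xE = cores 1-3
    until spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1                     # give up if the target died during startup
        sleep 0.1
    done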
00:08:56.283 [2024-07-15 10:21:50.767223] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.283 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.283 [2024-07-15 10:21:50.834004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:56.542 [2024-07-15 10:21:50.946342] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.542 [2024-07-15 10:21:50.946405] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.542 [2024-07-15 10:21:50.946419] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.542 [2024-07-15 10:21:50.946430] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.542 [2024-07-15 10:21:50.946439] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.542 [2024-07-15 10:21:50.946566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.542 [2024-07-15 10:21:50.946629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.542 [2024-07-15 10:21:50.946632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.542 [2024-07-15 10:21:51.095121] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.542 [2024-07-15 10:21:51.128063] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.542 NULL1 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2239356 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.542 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:57.108 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.108 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:08:57.108 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:57.108 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.108 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:57.366 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.366 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:08:57.366 10:21:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:57.366 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.366 10:21:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:57.624 10:21:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.624 10:21:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 
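With the target up, connect_stress.sh provisions one subsystem capped at ten namespaces, backs it with a 1000 MB null bdev, starts the connect_stress client against it for ten seconds, and then feeds the target batched namespace RPCs for as long as the client stays alive (the repeating kill -0 2239356 / rpc_cmd rounds above and below). Replayed from the log with paths shortened; the contents of the batched rpc.txt and the exact loop bodies are assumptions, since only the script's loop headers (lines 27-28 and 34-35) are visible:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512        # name, size in MB, block size

    spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!                                   # 2239356 in this run

    rpcs=spdk/test/nvmf/target/rpc.txt
    for i in $(seq 1 20); do                      # line 27: queue 20 hotplug rounds
        # line 28 appends a batch to rpc.txt; the exact contents are an assumption
        printf '%s\n' \
            'nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1' \
            'nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1' >> "$rpcs"
    done

    while kill -0 "$PERF_PID"; do                 # line 34: loop while the client lives
        rpc.py < "$rpcs"                          # approximation of the rpc_cmd wrapper at line 35
    done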
00:08:57.624 10:21:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:57.624 10:21:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.624 10:21:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:57.881 10:21:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.881 10:21:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:08:57.881 10:21:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:57.881 10:21:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.881 10:21:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:58.162 10:21:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.162 10:21:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:08:58.162 10:21:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:58.162 10:21:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.162 10:21:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:58.725 10:21:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.725 10:21:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:08:58.725 10:21:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:58.725 10:21:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.725 10:21:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:58.982 10:21:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.982 10:21:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:08:58.982 10:21:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:58.982 10:21:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.982 10:21:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:59.239 10:21:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.239 10:21:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:08:59.239 10:21:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:59.239 10:21:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.239 10:21:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:59.496 10:21:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.497 10:21:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:08:59.497 10:21:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:59.497 10:21:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.497 10:21:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:59.754 10:21:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.754 10:21:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:00.011 10:21:54 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:09:00.011 10:21:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.011 10:21:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:00.268 10:21:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.268 10:21:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:00.268 10:21:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:00.268 10:21:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.268 10:21:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:00.525 10:21:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.525 10:21:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:00.525 10:21:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:00.525 10:21:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.525 10:21:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:00.782 10:21:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.782 10:21:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:00.782 10:21:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:00.782 10:21:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.782 10:21:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.040 10:21:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.040 10:21:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:01.040 10:21:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:01.040 10:21:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.040 10:21:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.606 10:21:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.606 10:21:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:01.606 10:21:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:01.606 10:21:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.606 10:21:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.864 10:21:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.864 10:21:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:01.864 10:21:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:01.864 10:21:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.864 10:21:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:02.121 10:21:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.121 10:21:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:02.121 10:21:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:02.121 
10:21:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.121 10:21:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:02.379 10:21:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.379 10:21:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:02.379 10:21:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:02.379 10:21:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.379 10:21:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:02.944 10:21:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.944 10:21:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:02.944 10:21:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:02.944 10:21:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.944 10:21:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:03.201 10:21:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.201 10:21:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:03.201 10:21:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:03.201 10:21:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.201 10:21:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:03.459 10:21:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.459 10:21:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:03.459 10:21:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:03.459 10:21:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.459 10:21:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:03.717 10:21:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.717 10:21:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:03.717 10:21:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:03.717 10:21:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.717 10:21:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:03.974 10:21:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.974 10:21:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:03.974 10:21:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:03.974 10:21:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.974 10:21:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:04.539 10:21:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.539 10:21:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:04.539 10:21:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:04.539 10:21:58 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.539 10:21:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:04.796 10:21:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.796 10:21:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:04.796 10:21:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:04.796 10:21:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.796 10:21:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:05.054 10:21:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.054 10:21:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:05.054 10:21:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:05.054 10:21:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.054 10:21:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:05.310 10:21:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.310 10:21:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:05.310 10:21:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:05.310 10:21:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.310 10:21:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:05.620 10:22:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.620 10:22:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:05.620 10:22:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:05.620 10:22:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.620 10:22:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:05.877 10:22:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.877 10:22:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:05.877 10:22:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:05.877 10:22:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.877 10:22:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.439 10:22:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.439 10:22:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:06.439 10:22:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:06.439 10:22:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.439 10:22:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.695 10:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.695 10:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:06.695 10:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:06.695 10:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
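The stretch of identical kill -0 / rpc_cmd rounds above is the ten-second race window set by -t 10. Once the client exits, kill -0 fails ("No such process" just below), the loop falls through, and script lines 38-41 reap the pid and clean up before nvmftestfini at line 43:

    wait "$PERF_PID"            # line 38: collect the client's exit status
    rm -f "$rpcs"               # line 39: remove the batched-RPC scratch file
    trap - SIGINT SIGTERM EXIT  # line 41: clear the failure trap before teardown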
00:09:06.696 10:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.696 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2239356 00:09:06.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2239356) - No such process 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2239356 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:06.953 rmmod nvme_tcp 00:09:06.953 rmmod nvme_fabrics 00:09:06.953 rmmod nvme_keyring 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2239216 ']' 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2239216 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2239216 ']' 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2239216 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2239216 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2239216' 00:09:06.953 killing process with pid 2239216 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2239216 00:09:06.953 10:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2239216 00:09:07.212 10:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:07.212 10:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:07.212 10:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:09:07.212 10:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:07.212 10:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:07.212 10:22:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.212 10:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.212 10:22:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.753 10:22:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:09.753 00:09:09.753 real 0m15.263s 00:09:09.753 user 0m38.404s 00:09:09.753 sys 0m5.881s 00:09:09.753 10:22:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.753 10:22:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.753 ************************************ 00:09:09.753 END TEST nvmf_connect_stress 00:09:09.753 ************************************ 00:09:09.753 10:22:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:09.753 10:22:03 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:09.753 10:22:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:09.753 10:22:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.753 10:22:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.753 ************************************ 00:09:09.753 START TEST nvmf_fused_ordering 00:09:09.753 ************************************ 00:09:09.753 10:22:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:09.753 * Looking for test storage... 
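The teardown just above shows the harness's process-reaping idiom: probe the PID with 'kill -0' (which sends no signal and merely reports liveness, hence the "No such process" message once the stress workers are already gone), 'wait' to collect the exit status, and, inside killprocess, a 'ps --no-headers -o comm=' check on the command name before signalling so that a recycled PID is never killed by accident. A minimal self-contained sketch of that pattern; the 'sleep' stand-in and the echoed message are illustrative, not taken from the harness:

    #!/usr/bin/env bash
    sleep 30 &                 # stand-in for the long-running target process
    pid=$!

    # kill -0 delivers no signal; exit status 0 just means the PID exists.
    if kill -0 "$pid" 2>/dev/null; then
        # Guard against PID reuse: only signal if the command name matches.
        if [ "$(ps --no-headers -o comm= "$pid")" = sleep ]; then
            kill "$pid"        # SIGTERM
        fi
    fi

    wait "$pid"                # reap the child; status is 128+15=143 for SIGTERM
    echo "process $pid exited with status $?"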
00:09:09.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.753 10:22:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:09.754 10:22:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.754 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:09.754 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:09.754 10:22:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:09:09.754 10:22:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:11.712 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:11.712 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:11.712 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:11.712 10:22:05 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:11.712 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.712 10:22:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:11.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:11.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:09:11.712 00:09:11.712 --- 10.0.0.2 ping statistics --- 00:09:11.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.712 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:11.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:11.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:09:11.712 00:09:11.712 --- 10.0.0.1 ping statistics --- 00:09:11.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.712 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:11.712 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2242514 00:09:11.713 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:11.713 10:22:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2242514 00:09:11.713 10:22:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2242514 ']' 00:09:11.713 10:22:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.713 10:22:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:11.713 10:22:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.713 10:22:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:11.713 10:22:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:11.713 [2024-07-15 10:22:06.182388] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
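The sequence above builds the two-endpoint TCP topology the rest of the test depends on: the target-side port cvl_0_0 is moved into the network namespace cvl_0_0_ns_spdk and addressed 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule admits TCP port 4420 (the NVMe/TCP listener), and a ping in each direction verifies reachability before the target is launched under 'ip netns exec'. A sketch of the same topology for a machine without the looped E810 ports, substituting a veth pair; the names tgt_ns, veth_init and veth_tgt are illustrative, and everything must run as root:

    # Create the target namespace and a veth pair; move one end inside.
    ip netns add tgt_ns
    ip link add veth_init type veth peer name veth_tgt
    ip link set veth_tgt netns tgt_ns

    # Address both ends on the same /24 used in the log.
    ip addr add 10.0.0.1/24 dev veth_init
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt

    # Bring the links (and the namespace loopback) up.
    ip link set veth_init up
    ip netns exec tgt_ns ip link set veth_tgt up
    ip netns exec tgt_ns ip link set lo up

    # Admit NVMe/TCP traffic on the initiator interface.
    iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT

    # Verify reachability in both directions, as the harness does.
    ping -c 1 10.0.0.2
    ip netns exec tgt_ns ping -c 1 10.0.0.1

Anything started with 'ip netns exec tgt_ns ...' then listens only inside the namespace, which is why the log prefixes the nvmf_tgt invocation with the NVMF_TARGET_NS_CMD wrapper.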
00:09:11.713 [2024-07-15 10:22:06.182472] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.713 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.713 [2024-07-15 10:22:06.249492] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.971 [2024-07-15 10:22:06.365248] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.971 [2024-07-15 10:22:06.365296] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:11.971 [2024-07-15 10:22:06.365324] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.971 [2024-07-15 10:22:06.365336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.971 [2024-07-15 10:22:06.365346] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:11.971 [2024-07-15 10:22:06.365372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:12.537 [2024-07-15 10:22:07.138357] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:12.537 [2024-07-15 10:22:07.154499] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.537 10:22:07 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:12.537 NULL1 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.537 10:22:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:12.795 [2024-07-15 10:22:07.199757] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:12.795 [2024-07-15 10:22:07.199798] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2242670 ] 00:09:12.795 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.053 Attached to nqn.2016-06.io.spdk:cnode1 00:09:13.053 Namespace ID: 1 size: 1GB 00:09:13.053 fused_ordering(0) 00:09:13.053 fused_ordering(1) 00:09:13.053 fused_ordering(2) 00:09:13.053 fused_ordering(3) 00:09:13.053 fused_ordering(4) 00:09:13.053 fused_ordering(5) 00:09:13.053 fused_ordering(6) 00:09:13.053 fused_ordering(7) 00:09:13.053 fused_ordering(8) 00:09:13.053 fused_ordering(9) 00:09:13.053 fused_ordering(10) 00:09:13.053 fused_ordering(11) 00:09:13.053 fused_ordering(12) 00:09:13.053 fused_ordering(13) 00:09:13.053 fused_ordering(14) 00:09:13.053 fused_ordering(15) 00:09:13.053 fused_ordering(16) 00:09:13.053 fused_ordering(17) 00:09:13.053 fused_ordering(18) 00:09:13.053 fused_ordering(19) 00:09:13.053 fused_ordering(20) 00:09:13.053 fused_ordering(21) 00:09:13.053 fused_ordering(22) 00:09:13.053 fused_ordering(23) 00:09:13.053 fused_ordering(24) 00:09:13.053 fused_ordering(25) 00:09:13.053 fused_ordering(26) 00:09:13.053 fused_ordering(27) 00:09:13.053 fused_ordering(28) 00:09:13.053 fused_ordering(29) 00:09:13.053 fused_ordering(30) 00:09:13.053 fused_ordering(31) 00:09:13.053 fused_ordering(32) 00:09:13.053 fused_ordering(33) 00:09:13.053 fused_ordering(34) 00:09:13.053 fused_ordering(35) 00:09:13.053 fused_ordering(36) 00:09:13.053 fused_ordering(37) 00:09:13.053 fused_ordering(38) 00:09:13.053 fused_ordering(39) 00:09:13.053 fused_ordering(40) 00:09:13.053 fused_ordering(41) 00:09:13.053 fused_ordering(42) 00:09:13.053 fused_ordering(43) 00:09:13.053 
fused_ordering(44) 00:09:13.053 fused_ordering(45) [... 00:09:13.053 fused_ordering(46) through 00:09:15.685 fused_ordering(1011) elided: 966 further sequential fused_ordering entries ...] 00:09:15.685 fused_ordering(1012)
00:09:15.685 fused_ordering(1013) 00:09:15.685 fused_ordering(1014) 00:09:15.685 fused_ordering(1015) 00:09:15.685 fused_ordering(1016) 00:09:15.685 fused_ordering(1017) 00:09:15.685 fused_ordering(1018) 00:09:15.685 fused_ordering(1019) 00:09:15.685 fused_ordering(1020) 00:09:15.685 fused_ordering(1021) 00:09:15.685 fused_ordering(1022) 00:09:15.685 fused_ordering(1023) 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:15.685 rmmod nvme_tcp 00:09:15.685 rmmod nvme_fabrics 00:09:15.685 rmmod nvme_keyring 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2242514 ']' 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2242514 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2242514 ']' 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2242514 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2242514 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2242514' 00:09:15.685 killing process with pid 2242514 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2242514 00:09:15.685 10:22:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2242514 00:09:15.945 10:22:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:15.945 10:22:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:15.945 10:22:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:15.945 10:22:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:15.945 10:22:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:15.945 10:22:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.945 10:22:10 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.945 10:22:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.849 10:22:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:17.849 00:09:17.849 real 0m8.478s 00:09:17.849 user 0m6.208s 00:09:17.849 sys 0m3.587s 00:09:17.849 10:22:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.849 10:22:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:17.849 ************************************ 00:09:17.849 END TEST nvmf_fused_ordering 00:09:17.849 ************************************ 00:09:17.849 10:22:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:17.849 10:22:12 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:17.849 10:22:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:17.849 10:22:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.849 10:22:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:17.849 ************************************ 00:09:17.849 START TEST nvmf_delete_subsystem 00:09:17.849 ************************************ 00:09:17.849 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:18.108 * Looking for test storage... 00:09:18.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.108 10:22:12 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:18.108 10:22:12 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:18.108 10:22:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.020 10:22:14 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.020 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:20.021 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:20.021 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:20.021 10:22:14 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:20.021 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:20.021 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:20.021 10:22:14 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:20.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:09:20.021 00:09:20.021 --- 10.0.0.2 ping statistics --- 00:09:20.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.021 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:09:20.021 00:09:20.021 --- 10.0.0.1 ping statistics --- 00:09:20.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.021 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2244878 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2244878 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2244878 ']' 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.021 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.021 [2024-07-15 10:22:14.657652] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:20.021 [2024-07-15 10:22:14.657724] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.282 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.282 [2024-07-15 10:22:14.726117] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:20.282 [2024-07-15 10:22:14.836471] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:20.282 [2024-07-15 10:22:14.836532] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.282 [2024-07-15 10:22:14.836560] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.282 [2024-07-15 10:22:14.836571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.282 [2024-07-15 10:22:14.836581] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.282 [2024-07-15 10:22:14.836667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.282 [2024-07-15 10:22:14.836673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.540 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:20.540 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:20.540 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.540 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:20.540 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.540 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.540 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:20.540 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.540 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.540 [2024-07-15 10:22:14.987604] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.540 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.540 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:20.540 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.540 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.540 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.540 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.540 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.540 10:22:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.540 [2024-07-15 10:22:15.003988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.540 10:22:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.540 10:22:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:20.540 10:22:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.540 10:22:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.540 NULL1 00:09:20.540 10:22:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:09:20.540 10:22:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:20.540 10:22:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.540 10:22:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.540 Delay0 00:09:20.540 10:22:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.540 10:22:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.540 10:22:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.540 10:22:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.540 10:22:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.540 10:22:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2245014 00:09:20.540 10:22:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:20.540 10:22:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:20.540 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.540 [2024-07-15 10:22:15.078609] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
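The xtrace above is the entire setup for this test case: create the transport and subsystem, back the namespace with a delay bdev so I/O stays in flight, start perf, then pull the subsystem out from under it. Reproduced by hand against a running nvmf_tgt, the sequence looks roughly like the sketch below (a minimal sketch, not the harness itself; it assumes scripts/rpc.py from the SPDK tree talking to the target on the default /var/tmp/spdk.sock, which is what the harness's rpc_cmd wraps; all commands and arguments are copied from the trace):

    # Rough manual equivalent of the delete_subsystem.sh setup traced above (sketch)
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 1000 MB null bdev with 512-byte blocks, wrapped in a delay bdev that adds
    # ~1 s of artificial latency (arguments are in microseconds), so requests
    # are still queued when the subsystem is deleted:
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Start I/O, give it 2 s to ramp up, then delete the subsystem underneath it:
    build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1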
00:09:22.446 10:22:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.446 10:22:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.446 10:22:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:22.706 [repetitive in-flight I/O output elided: several hundred interleaved 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' lines as nqn.2016-06.io.spdk:cnode1 is deleted under load; sct=0/sc=8 is the generic NVMe status for commands aborted due to submission queue deletion, i.e. the failure mode this test deliberately provokes]
00:09:22.706 [2024-07-15 10:22:17.209825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe9f8000c00 is same with the state(5) to be set
00:09:22.706 [2024-07-15 10:22:17.210535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e3e0 is same with the state(5) to be set
00:09:23.643 [2024-07-15 10:22:18.180180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8fac0 is same with the state(5) to be set
00:09:23.643 [2024-07-15 10:22:18.207786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e5c0 is same with the state(5) to be set
00:09:23.644 [2024-07-15 10:22:18.208028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e980 is same with the state(5) to be set
00:09:23.644 [2024-07-15 10:22:18.212105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe9f800d600 is same with the state(5) to be set
00:09:23.644 [2024-07-15 10:22:18.212274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe9f800cfe0 is same with the state(5) to be set
00:09:23.644 Initializing NVMe Controllers
00:09:23.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:23.644 Controller IO queue size 128, less than required.
00:09:23.644 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:23.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:23.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:23.644 Initialization complete. Launching workers.
00:09:23.644 ========================================================
00:09:23.644                                                                             Latency(us)
00:09:23.644 Device Information                                                       :    IOPS   MiB/s    Average      min         max
00:09:23.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  168.71    0.08  896356.32   410.10  1012347.95
00:09:23.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  162.26    0.08  932463.89   582.66  2003711.47
00:09:23.644 ========================================================
00:09:23.644 Total                                                                    :  330.97    0.16  914058.23   410.10  2003711.47
00:09:23.644
00:09:23.644 [2024-07-15 10:22:18.213090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8fac0 (9): Bad file descriptor
00:09:23.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:09:23.644 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
10:22:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
10:22:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2245014
10:22:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:24.214 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
10:22:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2245014
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2245014) - No such process
10:22:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2245014
10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2245014
10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2245014
10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
10:22:18 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:24.214 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.214 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:24.214 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.214 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.214 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.214 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:24.214 [2024-07-15 10:22:18.737496] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.215 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.215 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.215 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.215 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:24.215 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.215 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2245422 00:09:24.215 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:24.215 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2245422 00:09:24.215 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:24.215 10:22:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:24.215 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.215 [2024-07-15 10:22:18.800967] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
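The half-second iterations that follow are the script's wait-for-exit loop: spdk_nvme_perf keeps issuing I/O in the background while the subsystem is deleted underneath it, and the test polls the perf PID until the process goes away. A minimal sketch of that pattern, assuming $perf_pid holds the PID captured at launch (the variable name and timeout handling here are illustrative, not lifted from delete_subsystem.sh):

    # Poll a background process until it exits; give up after ~10s.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && exit 1   # 20 iterations at 0.5s each
        sleep 0.5
    done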
00:09:24.782 10:22:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:24.782 10:22:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2245422 00:09:24.782 10:22:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:25.408 10:22:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:25.408 10:22:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2245422 00:09:25.408 10:22:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:25.672 10:22:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:25.672 10:22:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2245422 00:09:25.672 10:22:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:26.257 10:22:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:26.257 10:22:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2245422 00:09:26.257 10:22:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:26.822 10:22:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:26.822 10:22:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2245422 00:09:26.822 10:22:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:27.388 10:22:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:27.388 10:22:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2245422 00:09:27.388 10:22:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:27.646 Initializing NVMe Controllers 00:09:27.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:27.646 Controller IO queue size 128, less than required. 00:09:27.646 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:27.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:27.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:27.646 Initialization complete. Launching workers. 
00:09:27.646 ========================================================
00:09:27.646                                                                            Latency(us)
00:09:27.646 Device Information                                                     :    IOPS   MiB/s    Average        min        max
00:09:27.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1003877.67 1000202.69 1013769.43
00:09:27.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1004651.95 1000222.25 1042372.13
00:09:27.646 ========================================================
00:09:27.646 Total                                                                  :  256.00    0.12 1004264.81 1000202.69 1042372.13
00:09:27.646
00:09:27.646 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:27.646 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2245422
00:09:27.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2245422) - No such process
00:09:27.646 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2245422
00:09:27.646 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:27.646 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:09:27.646 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:27.646 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:09:27.646 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:27.646 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:09:27.646 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:27.646 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:27.646 rmmod nvme_tcp
00:09:27.646 rmmod nvme_fabrics
00:09:27.903 rmmod nvme_keyring
00:09:27.903 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:27.903 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:09:27.903 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:09:27.903 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2244878 ']'
00:09:27.903 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2244878
00:09:27.903 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2244878 ']'
00:09:27.903 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2244878
00:09:27.903 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:09:27.903 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:27.903 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2244878
00:09:27.903 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:09:27.903 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:09:27.903 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2244878'
00:09:27.903 killing process with pid 2244878
00:09:27.903 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2244878
00:09:27.903 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait
2244878 00:09:28.162 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:28.162 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:28.162 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:28.162 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:28.162 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:28.162 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.162 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.162 10:22:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.065 10:22:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:30.065 00:09:30.065 real 0m12.211s 00:09:30.065 user 0m27.866s 00:09:30.065 sys 0m2.888s 00:09:30.065 10:22:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.065 10:22:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:30.065 ************************************ 00:09:30.065 END TEST nvmf_delete_subsystem 00:09:30.065 ************************************ 00:09:30.065 10:22:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:30.065 10:22:24 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:30.065 10:22:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:30.065 10:22:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.065 10:22:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:30.324 ************************************ 00:09:30.324 START TEST nvmf_ns_masking 00:09:30.324 ************************************ 00:09:30.324 10:22:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:30.324 * Looking for test storage... 
00:09:30.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.324 10:22:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.324 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:30.324 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.324 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.324 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6170d96f-0239-4517-934b-04771e018425 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=79082c84-6894-4c97-96f0-b0f0fcd3171c 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=73b1c505-e823-4705-91a3-8072ecba6212 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:30.325 10:22:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:32.233 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:32.233 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:32.233 
10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:32.233 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:32.233 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:32.234 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:32.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:32.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms
00:09:32.234
00:09:32.234 --- 10.0.0.2 ping statistics ---
00:09:32.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:32.234 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:32.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:32.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms
00:09:32.234
00:09:32.234 --- 10.0.0.1 ping statistics ---
00:09:32.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:32.234 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2247769
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2247769
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2247769 ']'
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:32.234 10:22:26
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.234 10:22:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:32.494 [2024-07-15 10:22:26.912567] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:32.494 [2024-07-15 10:22:26.912635] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.494 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.494 [2024-07-15 10:22:26.978362] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.494 [2024-07-15 10:22:27.094409] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.494 [2024-07-15 10:22:27.094472] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.494 [2024-07-15 10:22:27.094495] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.494 [2024-07-15 10:22:27.094506] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.494 [2024-07-15 10:22:27.094515] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.494 [2024-07-15 10:22:27.094540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.754 10:22:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.754 10:22:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:32.754 10:22:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:32.754 10:22:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:32.754 10:22:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:32.754 10:22:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.754 10:22:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:33.013 [2024-07-15 10:22:27.472562] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.013 10:22:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:33.013 10:22:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:33.013 10:22:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:33.273 Malloc1 00:09:33.273 10:22:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:33.531 Malloc2 00:09:33.531 10:22:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
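Condensed, the target-side setup traced here comes down to six rpc.py calls, the last two of which follow immediately below (arguments exactly as they appear in this log; the script itself routes them through its rpc_cmd wrapper):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MiB malloc bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host, -s: serial number
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420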
00:09:33.789 10:22:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:34.048 10:22:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.307 [2024-07-15 10:22:28.773971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.307 10:22:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:34.307 10:22:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 73b1c505-e823-4705-91a3-8072ecba6212 -a 10.0.0.2 -s 4420 -i 4 00:09:34.307 10:22:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:34.307 10:22:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:34.307 10:22:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:34.307 10:22:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:34.307 10:22:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:36.844 10:22:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:36.844 10:22:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:36.844 10:22:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:36.844 10:22:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:36.844 10:22:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:36.844 10:22:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:36.844 10:22:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:36.844 10:22:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:36.844 10:22:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:36.844 10:22:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:36.844 10:22:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:36.844 10:22:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:36.844 10:22:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:36.844 [ 0]:0x1 00:09:36.844 10:22:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:36.844 10:22:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a7291dd2352f49b6a0a8bfa685f51864 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a7291dd2352f49b6a0a8bfa685f51864 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
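The [ 0]:0x1-style markers in this trace are produced by the test's ns_is_visible helper. Reconstructed from its xtrace, it behaves roughly like this; treat it as an approximation of the function in test/nvmf/target/ns_masking.sh, not a verbatim copy (the real helper parameterizes the controller device, hard-coded to /dev/nvme0 here):

    ns_is_visible() {
        # Show the matching entry from the controller's active namespace list, e.g. "[ 0]:0x1".
        nvme list-ns /dev/nvme0 | grep "$1"
        # The actual assertion: a namespace this host cannot see still identifies,
        # but reports an all-zero NGUID.
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }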
00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:36.844 [ 0]:0x1 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a7291dd2352f49b6a0a8bfa685f51864 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a7291dd2352f49b6a0a8bfa685f51864 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:36.844 [ 1]:0x2 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d3161f836a74ed5b468b11decb2bc51 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d3161f836a74ed5b468b11decb2bc51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:36.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.844 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.102 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:37.361 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:37.361 10:22:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 73b1c505-e823-4705-91a3-8072ecba6212 -a 10.0.0.2 -s 4420 -i 4 00:09:37.621 10:22:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:37.621 10:22:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:37.621 10:22:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:37.621 10:22:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:37.621 10:22:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:37.621 10:22:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:40.155 10:22:34 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:40.155 [ 0]:0x2 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d3161f836a74ed5b468b11decb2bc51 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
2d3161f836a74ed5b468b11decb2bc51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:40.155 [ 0]:0x1 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:40.155 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:40.156 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a7291dd2352f49b6a0a8bfa685f51864 00:09:40.156 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a7291dd2352f49b6a0a8bfa685f51864 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:40.156 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:40.156 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:40.156 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:40.156 [ 1]:0x2 00:09:40.156 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:40.156 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:40.156 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d3161f836a74ed5b468b11decb2bc51 00:09:40.156 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d3161f836a74ed5b468b11decb2bc51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:40.156 10:22:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:40.414 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:40.414 10:22:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:40.414 10:22:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:40.414 10:22:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:40.414 10:22:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.414 10:22:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:40.414 10:22:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.414 10:22:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:40.414 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:40.414 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:40.414 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:40.414 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:40.672 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:09:40.672 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:40.672 10:22:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:40.672 10:22:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:40.672 10:22:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:40.672 10:22:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:40.672 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:40.672 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:40.672 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:40.672 [ 0]:0x2 00:09:40.672 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:40.672 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:40.672 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d3161f836a74ed5b468b11decb2bc51 00:09:40.673 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d3161f836a74ed5b468b11decb2bc51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:40.673 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:40.673 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.673 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:40.932 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:40.932 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 73b1c505-e823-4705-91a3-8072ecba6212 -a 10.0.0.2 -s 4420 -i 4 00:09:41.198 10:22:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:41.198 10:22:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:41.198 10:22:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:41.198 10:22:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:41.198 10:22:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:41.198 10:22:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:43.134 10:22:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:43.134 10:22:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:43.134 10:22:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:43.134 10:22:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:43.134 10:22:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:43.134 10:22:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
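Stripped of the tracing, the masking round-trip this test keeps exercising is three RPCs (invocations exactly as they appear in the log above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Attach the namespace masked: no host sees it until explicitly mapped.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # Grant host1 access: ns 1 now lists with a real NGUID on that host's controller.
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # Revoke it: ns 1 reverts to the all-zero NGUID for host1.
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1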
00:09:43.135 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:43.135 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:43.135 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:43.135 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:43.135 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:43.135 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:43.135 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:43.393 [ 0]:0x1 00:09:43.393 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:43.393 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:43.393 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a7291dd2352f49b6a0a8bfa685f51864 00:09:43.393 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a7291dd2352f49b6a0a8bfa685f51864 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:43.393 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:43.393 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:43.393 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:43.393 [ 1]:0x2 00:09:43.393 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:43.393 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:43.393 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d3161f836a74ed5b468b11decb2bc51 00:09:43.393 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d3161f836a74ed5b468b11decb2bc51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:43.393 10:22:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:43.652 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:43.912 [ 0]:0x2 00:09:43.912 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:43.912 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:43.912 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d3161f836a74ed5b468b11decb2bc51 00:09:43.912 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d3161f836a74ed5b468b11decb2bc51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:43.912 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:43.912 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:43.912 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:43.912 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.912 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:43.912 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.912 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:43.912 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.912 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:43.912 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.912 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:43.912 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:44.172 [2024-07-15 10:22:38.583524] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:09:44.172 request:
00:09:44.172 {
00:09:44.172 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:09:44.172 "nsid": 2,
00:09:44.172 "host": "nqn.2016-06.io.spdk:host1",
00:09:44.172 "method": "nvmf_ns_remove_host",
00:09:44.172 "req_id": 1
00:09:44.172 }
00:09:44.172 Got JSON-RPC error response
00:09:44.172 response:
00:09:44.172 {
00:09:44.172 "code": -32602,
00:09:44.172 "message": "Invalid parameters"
00:09:44.172 }
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:09:44.172 [ 0]:0x2
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d3161f836a74ed5b468b11decb2bc51
00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[
2d3161f836a74ed5b468b11decb2bc51 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:44.172 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:44.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.431 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2249396 00:09:44.431 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:44.431 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.431 10:22:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2249396 /var/tmp/host.sock 00:09:44.431 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2249396 ']' 00:09:44.431 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:44.431 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:44.431 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:44.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:44.431 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:44.431 10:22:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:44.431 [2024-07-15 10:22:38.920414] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
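For readers following the xtrace, the visibility check exercised above reduces to a small helper; a minimal reconstruction from the traced commands (target/ns_masking.sh@43-@45 -- the real script may differ in detail):

    ns_is_visible() {
        # The namespace must show up in the controller's active namespace list...
        nvme list-ns /dev/nvme0 | grep "$1"
        # ...and report a non-zero NGUID, i.e. not the all-zero placeholder.
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The NOT wrapper seen around it (autotest_common.sh@648-@675) simply inverts the exit status, so the nvmf_ns_remove_host failure and the invisible 0x1 namespace above are the expected outcomes.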
00:09:44.431 [2024-07-15 10:22:38.920505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2249396 ] 00:09:44.431 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.431 [2024-07-15 10:22:38.987772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.691 [2024-07-15 10:22:39.107612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.257 10:22:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:45.257 10:22:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:45.257 10:22:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.516 10:22:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:46.082 10:22:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6170d96f-0239-4517-934b-04771e018425 00:09:46.082 10:22:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:46.082 10:22:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6170D96F02394517934B04771E018425 -i 00:09:46.340 10:22:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 79082c84-6894-4c97-96f0-b0f0fcd3171c 00:09:46.340 10:22:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:46.340 10:22:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 79082C8468944C9796F0B0F0FCD3171C -i 00:09:46.598 10:22:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:46.856 10:22:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:47.115 10:22:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:47.115 10:22:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:47.373 nvme0n1 00:09:47.373 10:22:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:47.373 10:22:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:09:47.940 nvme1n2 00:09:47.940 10:22:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:47.940 10:22:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:47.940 10:22:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:47.940 10:22:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:47.940 10:22:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:47.940 10:22:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:47.940 10:22:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:47.940 10:22:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:47.940 10:22:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:48.198 10:22:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6170d96f-0239-4517-934b-04771e018425 == \6\1\7\0\d\9\6\f\-\0\2\3\9\-\4\5\1\7\-\9\3\4\b\-\0\4\7\7\1\e\0\1\8\4\2\5 ]] 00:09:48.198 10:22:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:48.198 10:22:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:48.198 10:22:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:48.457 10:22:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 79082c84-6894-4c97-96f0-b0f0fcd3171c == \7\9\0\8\2\c\8\4\-\6\8\9\4\-\4\c\9\7\-\9\6\f\0\-\b\0\f\0\f\c\d\3\1\7\1\c ]] 00:09:48.457 10:22:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2249396 00:09:48.458 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2249396 ']' 00:09:48.458 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2249396 00:09:48.458 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:48.458 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:48.458 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2249396 00:09:48.717 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:48.717 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:48.717 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2249396' 00:09:48.717 killing process with pid 2249396 00:09:48.717 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2249396 00:09:48.717 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2249396 00:09:48.975 10:22:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.234 10:22:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:49.234 10:22:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:49.234 10:22:43 
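hostrpc, as traced, is just rpc.py pointed at the second application's socket -- that second spdk_tgt (-r /var/tmp/host.sock -m 2) plays the NVMe host, so the test can ask it what each hostnqn actually sees. A sketch, with $rootdir standing in for the long /var/jenkins/... prefix:

    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }
    # the verification pattern traced above:
    hostrpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs   # expect: nvme0n1 nvme1n2
    hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'       # expect the uuid given to namespace 1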
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:49.234 10:22:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:49.234 10:22:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:49.234 10:22:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:49.234 10:22:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:49.234 10:22:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:49.234 rmmod nvme_tcp 00:09:49.492 rmmod nvme_fabrics 00:09:49.492 rmmod nvme_keyring 00:09:49.492 10:22:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:49.492 10:22:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:49.492 10:22:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:49.492 10:22:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2247769 ']' 00:09:49.492 10:22:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2247769 00:09:49.492 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2247769 ']' 00:09:49.492 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2247769 00:09:49.492 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:49.492 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:49.492 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2247769 00:09:49.492 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:49.492 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:49.492 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2247769' 00:09:49.492 killing process with pid 2247769 00:09:49.492 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2247769 00:09:49.492 10:22:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2247769 00:09:49.750 10:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:49.750 10:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:49.750 10:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:49.750 10:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:49.750 10:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:49.750 10:22:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.750 10:22:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.750 10:22:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.284 10:22:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:52.284 00:09:52.284 real 0m21.600s 00:09:52.284 user 0m28.701s 00:09:52.284 sys 0m4.125s 00:09:52.284 10:22:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:52.284 10:22:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:52.284 ************************************ 00:09:52.284 END TEST nvmf_ns_masking 00:09:52.284 ************************************ 00:09:52.284 10:22:46 nvmf_tcp -- 
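killprocess, pieced together from the autotest_common.sh fragments traced above (@948-@972); a sketch, since the branch for sudo-wrapped processes is only hinted at in the log:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                        # @952: refuse to act on a dead pid
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0, reactor_1, ...
        fi
        # (the real helper special-cases process_name == sudo here)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }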
common/autotest_common.sh@1142 -- # return 0 00:09:52.284 10:22:46 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:52.284 10:22:46 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:52.284 10:22:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:52.284 10:22:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.284 10:22:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:52.284 ************************************ 00:09:52.284 START TEST nvmf_nvme_cli 00:09:52.284 ************************************ 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:52.284 * Looking for test storage... 00:09:52.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.284 10:22:46 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:52.285 10:22:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:54.187 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:54.187 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:54.187 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:54.187 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.187 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.188 10:22:48 
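Condensed from the nvmf_tcp_init commands above: the first ice port (cvl_0_0) moves into a fresh network namespace to play the target, the second (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule opens the NVMe/TCP port between them:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings that follow confirm the path in both directions before any NVMe traffic is attempted.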
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:54.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:09:54.188 00:09:54.188 --- 10.0.0.2 ping statistics --- 00:09:54.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.188 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:54.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:09:54.188 00:09:54.188 --- 10.0.0.1 ping statistics --- 00:09:54.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.188 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2251894 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2251894 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2251894 ']' 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:54.188 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:54.188 [2024-07-15 10:22:48.567952] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
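nvmfappstart then launches nvmf_tgt inside that namespace and blocks until its RPC socket answers. A sketch of the traced sequence -- the backgrounding details and the exact poll command are assumptions; the trace only shows waitforlisten with rpc_addr=/var/tmp/spdk.sock and max_retries=100:

    # $SPDK_BIN_DIR abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # e.g. retry rpc.py -s /var/tmp/spdk.sock rpc_get_methods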
00:09:54.188 [2024-07-15 10:22:48.568033] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.188 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.188 [2024-07-15 10:22:48.633537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.188 [2024-07-15 10:22:48.743279] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.188 [2024-07-15 10:22:48.743342] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.188 [2024-07-15 10:22:48.743355] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.188 [2024-07-15 10:22:48.743366] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.188 [2024-07-15 10:22:48.743375] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.188 [2024-07-15 10:22:48.743507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.188 [2024-07-15 10:22:48.743573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.188 [2024-07-15 10:22:48.743637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.188 [2024-07-15 10:22:48.743640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:54.447 [2024-07-15 10:22:48.900782] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:54.447 Malloc0 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:54.447 Malloc1 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.447 10:22:48 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:54.447 [2024-07-15 10:22:48.986208] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.447 10:22:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:54.447 00:09:54.447 Discovery Log Number of Records 2, Generation counter 2 00:09:54.447 =====Discovery Log Entry 0====== 00:09:54.447 trtype: tcp 00:09:54.447 adrfam: ipv4 00:09:54.447 subtype: current discovery subsystem 00:09:54.447 treq: not required 00:09:54.447 portid: 0 00:09:54.447 trsvcid: 4420 00:09:54.447 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:54.447 traddr: 10.0.0.2 00:09:54.447 eflags: explicit discovery connections, duplicate discovery information 00:09:54.447 sectype: none 00:09:54.448 =====Discovery Log Entry 1====== 00:09:54.448 trtype: tcp 00:09:54.448 adrfam: ipv4 00:09:54.448 subtype: nvme subsystem 00:09:54.448 treq: not required 00:09:54.448 portid: 0 00:09:54.448 trsvcid: 4420 00:09:54.448 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:54.448 traddr: 10.0.0.2 00:09:54.448 eflags: none 00:09:54.448 sectype: none 00:09:54.448 10:22:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:54.448 10:22:49 nvmf_tcp.nvmf_nvme_cli -- 
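For reference, the rpc_cmd calls that produced the discovery log above are equivalent to running the following against the target's default /var/tmp/spdk.sock, together with the nvmf_create_transport call traced just before:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Entry 0 in the discovery log is the discovery subsystem itself; entry 1 is cnode1 with the two Malloc namespaces behind it.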
target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:54.448 10:22:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:54.448 10:22:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:54.448 10:22:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:54.448 10:22:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:54.448 10:22:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:54.448 10:22:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:54.448 10:22:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:54.448 10:22:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:54.448 10:22:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:55.383 10:22:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:55.383 10:22:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:09:55.383 10:22:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:55.383 10:22:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:55.383 10:22:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:55.383 10:22:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:57.336 10:22:51 
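get_nvme_devs, reconstructed from the nvmf/common.sh fragments in the trace (@521-@526): it scrapes the tabular output of 'nvme list' and keeps only the device nodes, skipping the header and separator rows:

    get_nvme_devs() {
        local dev _
        while read -r dev _; do
            # first column is 'Node', then a '---------------------' rule, then /dev/nvme* rows
            [[ $dev == /dev/nvme* ]] && echo "$dev"
        done < <(nvme list)
    }

Before the connect it printed nothing (nvme_num_before_connection=0); after waitforserial it yields /dev/nvme0n2 and /dev/nvme0n1.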
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:57.336 /dev/nvme0n1 ]] 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:57.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- 
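waitforserial, which gated the step above, polls lsblk until the expected number of namespaces carrying the subsystem serial appear; a sketch from the traced loop (autotest_common.sh@1198-@1208), retry bound as observed:

    waitforserial() {
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

waitforserial_disconnect is the inverse check: per the trace it keeps probing the lsblk output until the serial is gone.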
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.336 rmmod nvme_tcp 00:09:57.336 rmmod nvme_fabrics 00:09:57.336 rmmod nvme_keyring 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2251894 ']' 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2251894 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2251894 ']' 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2251894 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:57.336 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2251894 00:09:57.595 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:57.595 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:57.595 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2251894' 00:09:57.595 killing process with pid 2251894 00:09:57.595 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2251894 00:09:57.595 10:22:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2251894 00:09:57.854 10:22:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.854 10:22:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:57.854 10:22:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:57.854 10:22:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.854 10:22:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.854 10:22:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.854 10:22:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.854 10:22:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.767 10:22:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:59.767 00:09:59.767 real 0m7.978s 00:09:59.767 user 0m14.499s 00:09:59.767 sys 0m2.150s 00:09:59.767 10:22:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:59.767 10:22:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:59.767 ************************************ 00:09:59.767 END TEST nvmf_nvme_cli 00:09:59.767 ************************************ 00:09:59.767 10:22:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:59.767 10:22:54 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:59.767 10:22:54 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:59.767 10:22:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:59.767 10:22:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.767 10:22:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:59.767 ************************************ 00:09:59.767 START TEST nvmf_vfio_user 00:09:59.767 ************************************ 00:09:59.767 10:22:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:00.025 * Looking for test storage... 00:10:00.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.025 10:22:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:10:00.026 
10:22:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2252809 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2252809' 00:10:00.026 Process pid: 2252809 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2252809 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2252809 ']' 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.026 10:22:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:00.026 [2024-07-15 10:22:54.536619] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:00.026 [2024-07-15 10:22:54.536714] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.026 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.026 [2024-07-15 10:22:54.598366] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.284 [2024-07-15 10:22:54.706164] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.284 [2024-07-15 10:22:54.706219] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.284 [2024-07-15 10:22:54.706249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.284 [2024-07-15 10:22:54.706260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.284 [2024-07-15 10:22:54.706270] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
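[Editor's note] The app_setup_trace notices above double as a how-to: the target was started with -e 0xFFFF and shm id 0, so its tracepoints can be read live over shared memory or copied out for offline analysis. A minimal sketch of both options follows; the workspace path is taken from this job, and the assumption that spdk_trace sits under build/bin alongside nvmf_tgt and the other binaries used in this run is mine, not stated in the log.

    # Sketch only: inspect the tracepoints of the nvmf_tgt launched above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Live snapshot, exactly as the notice suggests ('-s nvmf -i 0' matches
    # the app name and shm id the target was started with):
    "$SPDK/build/bin/spdk_trace" -s nvmf -i 0

    # Or keep the shared-memory trace file for offline analysis/debug:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0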
00:10:00.284 [2024-07-15 10:22:54.706319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.284 [2024-07-15 10:22:54.707912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.284 [2024-07-15 10:22:54.707997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.284 [2024-07-15 10:22:54.708000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.284 10:22:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:00.284 10:22:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:00.284 10:22:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:01.218 10:22:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:10:01.476 10:22:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:01.476 10:22:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:01.476 10:22:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:01.476 10:22:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:01.476 10:22:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:01.734 Malloc1 00:10:01.734 10:22:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:01.992 10:22:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:02.250 10:22:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:02.507 10:22:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:02.507 10:22:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:02.507 10:22:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:02.766 Malloc2 00:10:02.766 10:22:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:03.023 10:22:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:03.281 10:22:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:03.539 10:22:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:10:03.539 10:22:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:10:03.539 10:22:58 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:03.539 10:22:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:03.539 10:22:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:10:03.539 10:22:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:03.539 [2024-07-15 10:22:58.185643] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:03.539 [2024-07-15 10:22:58.185685] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253236 ] 00:10:03.800 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.800 [2024-07-15 10:22:58.220100] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:10:03.800 [2024-07-15 10:22:58.222639] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:03.800 [2024-07-15 10:22:58.222666] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f66e5848000 00:10:03.800 [2024-07-15 10:22:58.223624] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:03.800 [2024-07-15 10:22:58.224620] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:03.800 [2024-07-15 10:22:58.225622] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:03.800 [2024-07-15 10:22:58.226628] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:03.800 [2024-07-15 10:22:58.227630] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:03.800 [2024-07-15 10:22:58.228638] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:03.800 [2024-07-15 10:22:58.229648] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:03.800 [2024-07-15 10:22:58.230651] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:03.800 [2024-07-15 10:22:58.231659] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:03.800 [2024-07-15 10:22:58.231679] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f66e583d000 00:10:03.800 [2024-07-15 10:22:58.232792] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:03.800 [2024-07-15 10:22:58.246535] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:10:03.800 [2024-07-15 10:22:58.246575] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:10:03.800 [2024-07-15 10:22:58.255798] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:03.800 [2024-07-15 10:22:58.255868] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:03.800 [2024-07-15 10:22:58.255971] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:10:03.800 [2024-07-15 10:22:58.256004] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:10:03.800 [2024-07-15 10:22:58.256016] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:10:03.800 [2024-07-15 10:22:58.256794] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:10:03.800 [2024-07-15 10:22:58.256815] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:10:03.800 [2024-07-15 10:22:58.256828] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:10:03.800 [2024-07-15 10:22:58.257794] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:03.800 [2024-07-15 10:22:58.257813] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:10:03.800 [2024-07-15 10:22:58.257827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:10:03.800 [2024-07-15 10:22:58.258805] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:10:03.800 [2024-07-15 10:22:58.258825] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:03.800 [2024-07-15 10:22:58.259809] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:10:03.800 [2024-07-15 10:22:58.259829] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:10:03.800 [2024-07-15 10:22:58.259839] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:10:03.800 [2024-07-15 10:22:58.259850] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:03.800 [2024-07-15 10:22:58.259976] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:10:03.800 [2024-07-15 10:22:58.259986] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:03.800 [2024-07-15 10:22:58.259995] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:10:03.800 [2024-07-15 10:22:58.260815] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:10:03.800 [2024-07-15 10:22:58.261819] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:10:03.800 [2024-07-15 10:22:58.262830] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:03.800 [2024-07-15 10:22:58.263820] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:03.800 [2024-07-15 10:22:58.263952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:03.800 [2024-07-15 10:22:58.264837] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:10:03.800 [2024-07-15 10:22:58.264870] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:03.800 [2024-07-15 10:22:58.264891] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:10:03.800 [2024-07-15 10:22:58.264918] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:10:03.800 [2024-07-15 10:22:58.264941] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:10:03.800 [2024-07-15 10:22:58.264970] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:03.800 [2024-07-15 10:22:58.264982] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:03.800 [2024-07-15 10:22:58.265002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:03.800 [2024-07-15 10:22:58.265059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:03.800 [2024-07-15 10:22:58.265077] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:10:03.800 [2024-07-15 10:22:58.265090] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:10:03.800 [2024-07-15 10:22:58.265100] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:10:03.800 [2024-07-15 10:22:58.265108] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:03.800 [2024-07-15 10:22:58.265116] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:10:03.800 [2024-07-15 10:22:58.265124] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:10:03.800 [2024-07-15 10:22:58.265132] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:03.801 [2024-07-15 10:22:58.265197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:03.801 [2024-07-15 10:22:58.265220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.801 [2024-07-15 10:22:58.265249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.801 [2024-07-15 10:22:58.265261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.801 [2024-07-15 10:22:58.265273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:03.801 [2024-07-15 10:22:58.265281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265298] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265312] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:03.801 [2024-07-15 10:22:58.265324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:03.801 [2024-07-15 10:22:58.265335] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:10:03.801 [2024-07-15 10:22:58.265344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265355] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265369] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:03.801 [2024-07-15 10:22:58.265394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:03.801 [2024-07-15 10:22:58.265455] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265471] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265484] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:03.801 [2024-07-15 10:22:58.265493] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:03.801 [2024-07-15 10:22:58.265502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:03.801 [2024-07-15 10:22:58.265515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:03.801 [2024-07-15 10:22:58.265533] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:10:03.801 [2024-07-15 10:22:58.265549] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265564] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265576] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:03.801 [2024-07-15 10:22:58.265584] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:03.801 [2024-07-15 10:22:58.265593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:03.801 [2024-07-15 10:22:58.265618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:03.801 [2024-07-15 10:22:58.265641] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265656] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265668] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:03.801 [2024-07-15 10:22:58.265676] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:03.801 [2024-07-15 10:22:58.265685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:03.801 [2024-07-15 10:22:58.265699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:03.801 [2024-07-15 10:22:58.265713] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265724] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
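[Editor's note] For orientation, the admin-queue bring-up being traced here (read VS/CAP, set CC.EN, poll CSTS.RDY, then the IDENTIFY and SET FEATURES commands) was driven entirely by commands already visible in this run. Below is a condensed replay of the target setup at 10:22:54-10:22:57 and the identify invocation at 10:22:58; paths, NQNs, and flags are copied verbatim from this log, and this is a sketch for reproduction, not the test harness itself.

    # Sketch: recreate the vfio-user controller this trace is initializing.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=$SPDK/scripts/rpc.py

    $RPC nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

    # Identify the controller over the vfio-user socket with the same
    # debug flags that produced the state-machine trace above:
    $SPDK/build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci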
00:10:03.801 [2024-07-15 10:22:58.265738] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265752] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265761] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265770] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265778] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:10:03.801 [2024-07-15 10:22:58.265786] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:10:03.801 [2024-07-15 10:22:58.265794] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:10:03.801 [2024-07-15 10:22:58.265820] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:03.801 [2024-07-15 10:22:58.265839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:03.801 [2024-07-15 10:22:58.265871] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:03.801 [2024-07-15 10:22:58.265894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:03.801 [2024-07-15 10:22:58.265912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:03.801 [2024-07-15 10:22:58.265925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:03.801 [2024-07-15 10:22:58.265942] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:03.801 [2024-07-15 10:22:58.265955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:03.801 [2024-07-15 10:22:58.265979] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:03.801 [2024-07-15 10:22:58.265990] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:03.801 [2024-07-15 10:22:58.265996] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:03.801 [2024-07-15 10:22:58.266003] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:03.801 [2024-07-15 10:22:58.266012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:03.801 [2024-07-15 10:22:58.266024] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:03.801 
[2024-07-15 10:22:58.266032] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:03.801 [2024-07-15 10:22:58.266042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:03.801 [2024-07-15 10:22:58.266053] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:03.801 [2024-07-15 10:22:58.266061] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:03.801 [2024-07-15 10:22:58.266070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:03.801 [2024-07-15 10:22:58.266082] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:03.801 [2024-07-15 10:22:58.266090] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:03.801 [2024-07-15 10:22:58.266102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:03.801 [2024-07-15 10:22:58.266115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:03.801 [2024-07-15 10:22:58.266136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:03.801 [2024-07-15 10:22:58.266156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:03.801 [2024-07-15 10:22:58.266184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:03.801 ===================================================== 00:10:03.801 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:03.801 ===================================================== 00:10:03.801 Controller Capabilities/Features 00:10:03.801 ================================ 00:10:03.801 Vendor ID: 4e58 00:10:03.801 Subsystem Vendor ID: 4e58 00:10:03.801 Serial Number: SPDK1 00:10:03.801 Model Number: SPDK bdev Controller 00:10:03.801 Firmware Version: 24.09 00:10:03.801 Recommended Arb Burst: 6 00:10:03.802 IEEE OUI Identifier: 8d 6b 50 00:10:03.802 Multi-path I/O 00:10:03.802 May have multiple subsystem ports: Yes 00:10:03.802 May have multiple controllers: Yes 00:10:03.802 Associated with SR-IOV VF: No 00:10:03.802 Max Data Transfer Size: 131072 00:10:03.802 Max Number of Namespaces: 32 00:10:03.802 Max Number of I/O Queues: 127 00:10:03.802 NVMe Specification Version (VS): 1.3 00:10:03.802 NVMe Specification Version (Identify): 1.3 00:10:03.802 Maximum Queue Entries: 256 00:10:03.802 Contiguous Queues Required: Yes 00:10:03.802 Arbitration Mechanisms Supported 00:10:03.802 Weighted Round Robin: Not Supported 00:10:03.802 Vendor Specific: Not Supported 00:10:03.802 Reset Timeout: 15000 ms 00:10:03.802 Doorbell Stride: 4 bytes 00:10:03.802 NVM Subsystem Reset: Not Supported 00:10:03.802 Command Sets Supported 00:10:03.802 NVM Command Set: Supported 00:10:03.802 Boot Partition: Not Supported 00:10:03.802 Memory Page Size Minimum: 4096 bytes 00:10:03.802 Memory Page Size Maximum: 4096 bytes 00:10:03.802 Persistent Memory Region: Not Supported 
00:10:03.802 Optional Asynchronous Events Supported 00:10:03.802 Namespace Attribute Notices: Supported 00:10:03.802 Firmware Activation Notices: Not Supported 00:10:03.802 ANA Change Notices: Not Supported 00:10:03.802 PLE Aggregate Log Change Notices: Not Supported 00:10:03.802 LBA Status Info Alert Notices: Not Supported 00:10:03.802 EGE Aggregate Log Change Notices: Not Supported 00:10:03.802 Normal NVM Subsystem Shutdown event: Not Supported 00:10:03.802 Zone Descriptor Change Notices: Not Supported 00:10:03.802 Discovery Log Change Notices: Not Supported 00:10:03.802 Controller Attributes 00:10:03.802 128-bit Host Identifier: Supported 00:10:03.802 Non-Operational Permissive Mode: Not Supported 00:10:03.802 NVM Sets: Not Supported 00:10:03.802 Read Recovery Levels: Not Supported 00:10:03.802 Endurance Groups: Not Supported 00:10:03.802 Predictable Latency Mode: Not Supported 00:10:03.802 Traffic Based Keep ALive: Not Supported 00:10:03.802 Namespace Granularity: Not Supported 00:10:03.802 SQ Associations: Not Supported 00:10:03.802 UUID List: Not Supported 00:10:03.802 Multi-Domain Subsystem: Not Supported 00:10:03.802 Fixed Capacity Management: Not Supported 00:10:03.802 Variable Capacity Management: Not Supported 00:10:03.802 Delete Endurance Group: Not Supported 00:10:03.802 Delete NVM Set: Not Supported 00:10:03.802 Extended LBA Formats Supported: Not Supported 00:10:03.802 Flexible Data Placement Supported: Not Supported 00:10:03.802 00:10:03.802 Controller Memory Buffer Support 00:10:03.802 ================================ 00:10:03.802 Supported: No 00:10:03.802 00:10:03.802 Persistent Memory Region Support 00:10:03.802 ================================ 00:10:03.802 Supported: No 00:10:03.802 00:10:03.802 Admin Command Set Attributes 00:10:03.802 ============================ 00:10:03.802 Security Send/Receive: Not Supported 00:10:03.802 Format NVM: Not Supported 00:10:03.802 Firmware Activate/Download: Not Supported 00:10:03.802 Namespace Management: Not Supported 00:10:03.802 Device Self-Test: Not Supported 00:10:03.802 Directives: Not Supported 00:10:03.802 NVMe-MI: Not Supported 00:10:03.802 Virtualization Management: Not Supported 00:10:03.802 Doorbell Buffer Config: Not Supported 00:10:03.802 Get LBA Status Capability: Not Supported 00:10:03.802 Command & Feature Lockdown Capability: Not Supported 00:10:03.802 Abort Command Limit: 4 00:10:03.802 Async Event Request Limit: 4 00:10:03.802 Number of Firmware Slots: N/A 00:10:03.802 Firmware Slot 1 Read-Only: N/A 00:10:03.802 Firmware Activation Without Reset: N/A 00:10:03.802 Multiple Update Detection Support: N/A 00:10:03.802 Firmware Update Granularity: No Information Provided 00:10:03.802 Per-Namespace SMART Log: No 00:10:03.802 Asymmetric Namespace Access Log Page: Not Supported 00:10:03.802 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:10:03.802 Command Effects Log Page: Supported 00:10:03.802 Get Log Page Extended Data: Supported 00:10:03.802 Telemetry Log Pages: Not Supported 00:10:03.802 Persistent Event Log Pages: Not Supported 00:10:03.802 Supported Log Pages Log Page: May Support 00:10:03.802 Commands Supported & Effects Log Page: Not Supported 00:10:03.802 Feature Identifiers & Effects Log Page:May Support 00:10:03.802 NVMe-MI Commands & Effects Log Page: May Support 00:10:03.802 Data Area 4 for Telemetry Log: Not Supported 00:10:03.802 Error Log Page Entries Supported: 128 00:10:03.802 Keep Alive: Supported 00:10:03.802 Keep Alive Granularity: 10000 ms 00:10:03.802 00:10:03.802 NVM Command Set Attributes 
00:10:03.802 ========================== 00:10:03.802 Submission Queue Entry Size 00:10:03.802 Max: 64 00:10:03.802 Min: 64 00:10:03.802 Completion Queue Entry Size 00:10:03.802 Max: 16 00:10:03.802 Min: 16 00:10:03.802 Number of Namespaces: 32 00:10:03.802 Compare Command: Supported 00:10:03.802 Write Uncorrectable Command: Not Supported 00:10:03.802 Dataset Management Command: Supported 00:10:03.802 Write Zeroes Command: Supported 00:10:03.802 Set Features Save Field: Not Supported 00:10:03.802 Reservations: Not Supported 00:10:03.802 Timestamp: Not Supported 00:10:03.802 Copy: Supported 00:10:03.802 Volatile Write Cache: Present 00:10:03.802 Atomic Write Unit (Normal): 1 00:10:03.802 Atomic Write Unit (PFail): 1 00:10:03.802 Atomic Compare & Write Unit: 1 00:10:03.802 Fused Compare & Write: Supported 00:10:03.802 Scatter-Gather List 00:10:03.802 SGL Command Set: Supported (Dword aligned) 00:10:03.802 SGL Keyed: Not Supported 00:10:03.802 SGL Bit Bucket Descriptor: Not Supported 00:10:03.802 SGL Metadata Pointer: Not Supported 00:10:03.802 Oversized SGL: Not Supported 00:10:03.802 SGL Metadata Address: Not Supported 00:10:03.802 SGL Offset: Not Supported 00:10:03.802 Transport SGL Data Block: Not Supported 00:10:03.802 Replay Protected Memory Block: Not Supported 00:10:03.802 00:10:03.802 Firmware Slot Information 00:10:03.802 ========================= 00:10:03.802 Active slot: 1 00:10:03.802 Slot 1 Firmware Revision: 24.09 00:10:03.802 00:10:03.802 00:10:03.802 Commands Supported and Effects 00:10:03.802 ============================== 00:10:03.802 Admin Commands 00:10:03.802 -------------- 00:10:03.802 Get Log Page (02h): Supported 00:10:03.802 Identify (06h): Supported 00:10:03.802 Abort (08h): Supported 00:10:03.802 Set Features (09h): Supported 00:10:03.802 Get Features (0Ah): Supported 00:10:03.802 Asynchronous Event Request (0Ch): Supported 00:10:03.802 Keep Alive (18h): Supported 00:10:03.802 I/O Commands 00:10:03.802 ------------ 00:10:03.802 Flush (00h): Supported LBA-Change 00:10:03.802 Write (01h): Supported LBA-Change 00:10:03.802 Read (02h): Supported 00:10:03.802 Compare (05h): Supported 00:10:03.802 Write Zeroes (08h): Supported LBA-Change 00:10:03.802 Dataset Management (09h): Supported LBA-Change 00:10:03.802 Copy (19h): Supported LBA-Change 00:10:03.802 00:10:03.802 Error Log 00:10:03.802 ========= 00:10:03.802 00:10:03.802 Arbitration 00:10:03.802 =========== 00:10:03.803 Arbitration Burst: 1 00:10:03.803 00:10:03.803 Power Management 00:10:03.803 ================ 00:10:03.803 Number of Power States: 1 00:10:03.803 Current Power State: Power State #0 00:10:03.803 Power State #0: 00:10:03.803 Max Power: 0.00 W 00:10:03.803 Non-Operational State: Operational 00:10:03.803 Entry Latency: Not Reported 00:10:03.803 Exit Latency: Not Reported 00:10:03.803 Relative Read Throughput: 0 00:10:03.803 Relative Read Latency: 0 00:10:03.803 Relative Write Throughput: 0 00:10:03.803 Relative Write Latency: 0 00:10:03.803 Idle Power: Not Reported 00:10:03.803 Active Power: Not Reported 00:10:03.803 Non-Operational Permissive Mode: Not Supported 00:10:03.803 00:10:03.803 Health Information 00:10:03.803 ================== 00:10:03.803 Critical Warnings: 00:10:03.803 Available Spare Space: OK 00:10:03.803 Temperature: OK 00:10:03.803 Device Reliability: OK 00:10:03.803 Read Only: No 00:10:03.803 Volatile Memory Backup: OK 00:10:03.803 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:03.803 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:03.803 Available Spare: 0% 00:10:03.803 
[2024-07-15 10:22:58.266318] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:03.803 [2024-07-15 10:22:58.266334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:03.803 [2024-07-15 10:22:58.266380] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:10:03.803 [2024-07-15 10:22:58.266398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:03.803 [2024-07-15 10:22:58.266409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:03.803 [2024-07-15 10:22:58.266419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:03.803 [2024-07-15 10:22:58.266429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:03.803 [2024-07-15 10:22:58.266852] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:03.803 [2024-07-15 10:22:58.266895] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:10:03.803 [2024-07-15 10:22:58.267851] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:03.803 [2024-07-15 10:22:58.267954] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:10:03.803 [2024-07-15 10:22:58.267970] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:10:03.803 [2024-07-15 10:22:58.268874] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:10:03.803 [2024-07-15 10:22:58.268904] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:10:03.803 [2024-07-15 10:22:58.268974] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:10:03.803 [2024-07-15 10:22:58.270919] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:03.803 Available Spare Threshold: 0% 00:10:03.803 Life Percentage Used: 0% 00:10:03.803 Data Units Read: 0 00:10:03.803 Data Units Written: 0 00:10:03.803 Host Read Commands: 0 00:10:03.803 Host Write Commands: 0 00:10:03.803 Controller Busy Time: 0 minutes 00:10:03.803 Power Cycles: 0 00:10:03.803 Power On Hours: 0 hours 00:10:03.803 Unsafe Shutdowns: 0 00:10:03.803 Unrecoverable Media Errors: 0 00:10:03.803 Lifetime Error Log Entries: 0 00:10:03.803 Warning Temperature Time: 0 minutes 00:10:03.803 Critical Temperature Time: 0 minutes 00:10:03.803 00:10:03.803 Number of Queues 00:10:03.803 ================ 00:10:03.803 Number of I/O Submission Queues: 127 00:10:03.803 Number of I/O Completion Queues: 127 00:10:03.803 00:10:03.803 Active Namespaces 00:10:03.803 ================= 00:10:03.803 Namespace ID:1 00:10:03.803 Error Recovery Timeout: Unlimited 00:10:03.803 Command
Set Identifier: NVM (00h) 00:10:03.803 Deallocate: Supported 00:10:03.803 Deallocated/Unwritten Error: Not Supported 00:10:03.803 Deallocated Read Value: Unknown 00:10:03.803 Deallocate in Write Zeroes: Not Supported 00:10:03.803 Deallocated Guard Field: 0xFFFF 00:10:03.803 Flush: Supported 00:10:03.803 Reservation: Supported 00:10:03.803 Namespace Sharing Capabilities: Multiple Controllers 00:10:03.803 Size (in LBAs): 131072 (0GiB) 00:10:03.803 Capacity (in LBAs): 131072 (0GiB) 00:10:03.803 Utilization (in LBAs): 131072 (0GiB) 00:10:03.803 NGUID: 1F1263B286AD47419ACD2F536D98E0D3 00:10:03.803 UUID: 1f1263b2-86ad-4741-9acd-2f536d98e0d3 00:10:03.803 Thin Provisioning: Not Supported 00:10:03.803 Per-NS Atomic Units: Yes 00:10:03.803 Atomic Boundary Size (Normal): 0 00:10:03.803 Atomic Boundary Size (PFail): 0 00:10:03.803 Atomic Boundary Offset: 0 00:10:03.803 Maximum Single Source Range Length: 65535 00:10:03.803 Maximum Copy Length: 65535 00:10:03.803 Maximum Source Range Count: 1 00:10:03.803 NGUID/EUI64 Never Reused: No 00:10:03.803 Namespace Write Protected: No 00:10:03.803 Number of LBA Formats: 1 00:10:03.803 Current LBA Format: LBA Format #00 00:10:03.803 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:03.803 00:10:03.803 10:22:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:03.803 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.064 [2024-07-15 10:22:58.501744] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:09.338 Initializing NVMe Controllers 00:10:09.338 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:09.338 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:09.338 Initialization complete. Launching workers. 00:10:09.338 ======================================================== 00:10:09.338 Latency(us) 00:10:09.338 Device Information : IOPS MiB/s Average min max 00:10:09.338 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34778.86 135.85 3679.86 1173.64 7531.96 00:10:09.338 ======================================================== 00:10:09.338 Total : 34778.86 135.85 3679.86 1173.64 7531.96 00:10:09.338 00:10:09.338 [2024-07-15 10:23:03.524956] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:09.338 10:23:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:09.338 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.338 [2024-07-15 10:23:03.769129] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:14.607 Initializing NVMe Controllers 00:10:14.607 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:14.607 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:14.607 Initialization complete. Launching workers. 
00:10:14.607 ======================================================== 00:10:14.607 Latency(us) 00:10:14.607 Device Information : IOPS MiB/s Average min max 00:10:14.607 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16000.00 62.50 8006.60 4998.12 15977.73 00:10:14.607 ======================================================== 00:10:14.607 Total : 16000.00 62.50 8006.60 4998.12 15977.73 00:10:14.607 00:10:14.607 [2024-07-15 10:23:08.803983] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:14.607 10:23:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:14.607 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.607 [2024-07-15 10:23:09.014060] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:19.927 [2024-07-15 10:23:14.093295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:19.927 Initializing NVMe Controllers 00:10:19.927 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:19.927 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:19.927 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:19.927 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:19.927 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:19.927 Initialization complete. Launching workers. 00:10:19.927 Starting thread on core 2 00:10:19.927 Starting thread on core 3 00:10:19.927 Starting thread on core 1 00:10:19.927 10:23:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:19.927 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.927 [2024-07-15 10:23:14.403345] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:23.212 [2024-07-15 10:23:17.481687] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:23.212 Initializing NVMe Controllers 00:10:23.212 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:23.212 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:23.212 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:23.212 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:23.212 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:23.212 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:23.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:23.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:23.212 Initialization complete. Launching workers. 
00:10:23.212 Starting thread on core 1 with urgent priority queue 00:10:23.212 Starting thread on core 2 with urgent priority queue 00:10:23.212 Starting thread on core 3 with urgent priority queue 00:10:23.212 Starting thread on core 0 with urgent priority queue 00:10:23.212 SPDK bdev Controller (SPDK1 ) core 0: 5300.00 IO/s 18.87 secs/100000 ios 00:10:23.212 SPDK bdev Controller (SPDK1 ) core 1: 5036.33 IO/s 19.86 secs/100000 ios 00:10:23.212 SPDK bdev Controller (SPDK1 ) core 2: 5080.00 IO/s 19.69 secs/100000 ios 00:10:23.212 SPDK bdev Controller (SPDK1 ) core 3: 5182.67 IO/s 19.30 secs/100000 ios 00:10:23.212 ======================================================== 00:10:23.212 00:10:23.212 10:23:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:23.212 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.212 [2024-07-15 10:23:17.780989] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:23.212 Initializing NVMe Controllers 00:10:23.212 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:23.212 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:23.212 Namespace ID: 1 size: 0GB 00:10:23.212 Initialization complete. 00:10:23.212 INFO: using host memory buffer for IO 00:10:23.212 Hello world! 00:10:23.212 [2024-07-15 10:23:17.816573] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:23.212 10:23:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:23.472 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.472 [2024-07-15 10:23:18.102346] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:24.851 Initializing NVMe Controllers 00:10:24.851 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:24.851 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:24.851 Initialization complete. Launching workers. 
00:10:24.851 submit (in ns) avg, min, max = 9529.4, 3532.2, 4016908.9
00:10:24.851 complete (in ns) avg, min, max = 23563.7, 2063.3, 4016183.3
00:10:24.851
00:10:24.851 Submit histogram
00:10:24.851 ================
00:10:24.851 Range in us Cumulative Count
00:10:24.851 [per-bucket submit latency table condensed: cumulative count climbs from the 3.532 us bucket into the high 90s within single-digit microseconds, with a long tail of 19 IOs in the 3980.705 - 4029.250 us buckets reaching 100.0000%]
00:10:24.852
00:10:24.852 Complete histogram
00:10:24.852 ==================
00:10:24.852 Range in us Cumulative Count
00:10:24.852 [per-bucket completion latency table condensed: cumulative count climbs from the 2.062 us bucket, with a long tail of 73 IOs in the 3980.705 - 4029.250 us buckets reaching 100.0000%]
00:10:24.852 [2024-07-15 10:23:19.121379] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:10:24.853
00:10:24.853 10:23:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1
nqn.2019-07.io.spdk:cnode1 1 00:10:24.853 10:23:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:24.853 10:23:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:24.853 10:23:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:24.853 10:23:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:24.853 [ 00:10:24.853 { 00:10:24.853 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:24.853 "subtype": "Discovery", 00:10:24.853 "listen_addresses": [], 00:10:24.853 "allow_any_host": true, 00:10:24.853 "hosts": [] 00:10:24.853 }, 00:10:24.853 { 00:10:24.853 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:24.853 "subtype": "NVMe", 00:10:24.853 "listen_addresses": [ 00:10:24.853 { 00:10:24.853 "trtype": "VFIOUSER", 00:10:24.853 "adrfam": "IPv4", 00:10:24.853 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:24.853 "trsvcid": "0" 00:10:24.853 } 00:10:24.853 ], 00:10:24.853 "allow_any_host": true, 00:10:24.853 "hosts": [], 00:10:24.853 "serial_number": "SPDK1", 00:10:24.853 "model_number": "SPDK bdev Controller", 00:10:24.853 "max_namespaces": 32, 00:10:24.853 "min_cntlid": 1, 00:10:24.853 "max_cntlid": 65519, 00:10:24.853 "namespaces": [ 00:10:24.853 { 00:10:24.853 "nsid": 1, 00:10:24.853 "bdev_name": "Malloc1", 00:10:24.853 "name": "Malloc1", 00:10:24.853 "nguid": "1F1263B286AD47419ACD2F536D98E0D3", 00:10:24.853 "uuid": "1f1263b2-86ad-4741-9acd-2f536d98e0d3" 00:10:24.853 } 00:10:24.853 ] 00:10:24.853 }, 00:10:24.853 { 00:10:24.853 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:24.853 "subtype": "NVMe", 00:10:24.853 "listen_addresses": [ 00:10:24.853 { 00:10:24.853 "trtype": "VFIOUSER", 00:10:24.853 "adrfam": "IPv4", 00:10:24.853 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:24.853 "trsvcid": "0" 00:10:24.853 } 00:10:24.853 ], 00:10:24.853 "allow_any_host": true, 00:10:24.853 "hosts": [], 00:10:24.853 "serial_number": "SPDK2", 00:10:24.853 "model_number": "SPDK bdev Controller", 00:10:24.853 "max_namespaces": 32, 00:10:24.853 "min_cntlid": 1, 00:10:24.853 "max_cntlid": 65519, 00:10:24.853 "namespaces": [ 00:10:24.853 { 00:10:24.853 "nsid": 1, 00:10:24.853 "bdev_name": "Malloc2", 00:10:24.853 "name": "Malloc2", 00:10:24.853 "nguid": "6886E9D3C0784261B4E0C3BB4AAAB4BE", 00:10:24.853 "uuid": "6886e9d3-c078-4261-b4e0-c3bb4aaab4be" 00:10:24.853 } 00:10:24.853 ] 00:10:24.853 } 00:10:24.853 ] 00:10:24.853 10:23:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:24.853 10:23:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2255770 00:10:24.853 10:23:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:24.853 10:23:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:24.853 10:23:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:24.853 10:23:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:24.853 10:23:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:24.853 10:23:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:24.853 10:23:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:24.853 10:23:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:24.853 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.111 [2024-07-15 10:23:19.588370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:25.111 Malloc3 00:10:25.111 10:23:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:25.369 [2024-07-15 10:23:19.939936] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:25.369 10:23:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:25.369 Asynchronous Event Request test 00:10:25.369 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:25.369 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:25.369 Registering asynchronous event callbacks... 00:10:25.369 Starting namespace attribute notice tests for all controllers... 00:10:25.369 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:25.369 aer_cb - Changed Namespace 00:10:25.369 Cleaning up... 00:10:25.628 [ 00:10:25.628 { 00:10:25.628 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:25.628 "subtype": "Discovery", 00:10:25.628 "listen_addresses": [], 00:10:25.628 "allow_any_host": true, 00:10:25.628 "hosts": [] 00:10:25.628 }, 00:10:25.628 { 00:10:25.628 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:25.628 "subtype": "NVMe", 00:10:25.628 "listen_addresses": [ 00:10:25.628 { 00:10:25.628 "trtype": "VFIOUSER", 00:10:25.628 "adrfam": "IPv4", 00:10:25.628 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:25.628 "trsvcid": "0" 00:10:25.628 } 00:10:25.628 ], 00:10:25.628 "allow_any_host": true, 00:10:25.628 "hosts": [], 00:10:25.628 "serial_number": "SPDK1", 00:10:25.628 "model_number": "SPDK bdev Controller", 00:10:25.628 "max_namespaces": 32, 00:10:25.628 "min_cntlid": 1, 00:10:25.628 "max_cntlid": 65519, 00:10:25.628 "namespaces": [ 00:10:25.628 { 00:10:25.628 "nsid": 1, 00:10:25.628 "bdev_name": "Malloc1", 00:10:25.628 "name": "Malloc1", 00:10:25.628 "nguid": "1F1263B286AD47419ACD2F536D98E0D3", 00:10:25.628 "uuid": "1f1263b2-86ad-4741-9acd-2f536d98e0d3" 00:10:25.628 }, 00:10:25.628 { 00:10:25.628 "nsid": 2, 00:10:25.628 "bdev_name": "Malloc3", 00:10:25.628 "name": "Malloc3", 00:10:25.628 "nguid": "3E5820200F9240EC932CF7358C5FBE89", 00:10:25.628 "uuid": "3e582020-0f92-40ec-932c-f7358c5fbe89" 00:10:25.628 } 00:10:25.628 ] 00:10:25.628 }, 00:10:25.628 { 00:10:25.628 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:25.628 "subtype": "NVMe", 00:10:25.628 "listen_addresses": [ 00:10:25.628 { 00:10:25.628 "trtype": "VFIOUSER", 00:10:25.628 "adrfam": "IPv4", 00:10:25.628 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:25.628 "trsvcid": "0" 00:10:25.628 } 00:10:25.628 ], 00:10:25.628 "allow_any_host": true, 00:10:25.628 "hosts": [], 00:10:25.628 "serial_number": "SPDK2", 00:10:25.628 "model_number": "SPDK bdev Controller", 00:10:25.628 
"max_namespaces": 32, 00:10:25.628 "min_cntlid": 1, 00:10:25.628 "max_cntlid": 65519, 00:10:25.628 "namespaces": [ 00:10:25.628 { 00:10:25.628 "nsid": 1, 00:10:25.628 "bdev_name": "Malloc2", 00:10:25.628 "name": "Malloc2", 00:10:25.628 "nguid": "6886E9D3C0784261B4E0C3BB4AAAB4BE", 00:10:25.628 "uuid": "6886e9d3-c078-4261-b4e0-c3bb4aaab4be" 00:10:25.628 } 00:10:25.628 ] 00:10:25.628 } 00:10:25.628 ] 00:10:25.628 10:23:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2255770 00:10:25.628 10:23:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:25.628 10:23:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:25.628 10:23:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:25.628 10:23:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:25.628 [2024-07-15 10:23:20.233441] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:25.628 [2024-07-15 10:23:20.233488] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255784 ] 00:10:25.628 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.628 [2024-07-15 10:23:20.267275] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:25.628 [2024-07-15 10:23:20.276245] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:25.628 [2024-07-15 10:23:20.276275] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1d49d31000 00:10:25.628 [2024-07-15 10:23:20.277246] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:25.888 [2024-07-15 10:23:20.278244] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:25.888 [2024-07-15 10:23:20.279249] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:25.888 [2024-07-15 10:23:20.280253] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:25.888 [2024-07-15 10:23:20.281270] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:25.888 [2024-07-15 10:23:20.282287] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:25.888 [2024-07-15 10:23:20.283298] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:25.888 [2024-07-15 10:23:20.284303] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:25.888 [2024-07-15 10:23:20.285312] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:25.888 [2024-07-15 10:23:20.285333] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1d49d26000 00:10:25.888 [2024-07-15 10:23:20.286489] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:25.888 [2024-07-15 10:23:20.301553] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:25.888 [2024-07-15 10:23:20.301592] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:25.888 [2024-07-15 10:23:20.306690] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:25.888 [2024-07-15 10:23:20.306751] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:25.888 [2024-07-15 10:23:20.306851] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:10:25.888 [2024-07-15 10:23:20.306911] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:25.888 [2024-07-15 10:23:20.306924] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:25.888 [2024-07-15 10:23:20.307693] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:25.888 [2024-07-15 10:23:20.307717] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:25.888 [2024-07-15 10:23:20.307730] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:25.888 [2024-07-15 10:23:20.308697] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:25.888 [2024-07-15 10:23:20.308717] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:25.888 [2024-07-15 10:23:20.308731] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:25.888 [2024-07-15 10:23:20.309710] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:25.888 [2024-07-15 10:23:20.309730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:25.888 [2024-07-15 10:23:20.310714] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:25.888 [2024-07-15 10:23:20.310736] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:25.888 [2024-07-15 10:23:20.310745] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:25.888 [2024-07-15 10:23:20.310757] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:25.888 [2024-07-15 10:23:20.310866] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:25.888 [2024-07-15 10:23:20.310874] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:25.888 [2024-07-15 10:23:20.310913] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:25.888 [2024-07-15 10:23:20.311723] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:25.888 [2024-07-15 10:23:20.312728] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:25.888 [2024-07-15 10:23:20.313734] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:25.888 [2024-07-15 10:23:20.314740] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:25.888 [2024-07-15 10:23:20.314833] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:25.888 [2024-07-15 10:23:20.315740] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:25.888 [2024-07-15 10:23:20.315760] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:25.888 [2024-07-15 10:23:20.315769] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:25.888 [2024-07-15 10:23:20.315793] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:25.888 [2024-07-15 10:23:20.315807] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:25.888 [2024-07-15 10:23:20.315835] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:25.888 [2024-07-15 10:23:20.315845] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:25.888 [2024-07-15 10:23:20.315887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:25.888 [2024-07-15 10:23:20.323895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:25.888 [2024-07-15 10:23:20.323936] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:25.888 [2024-07-15 10:23:20.323951] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:25.888 [2024-07-15 10:23:20.323960] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:25.888 [2024-07-15 10:23:20.323968] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:25.888 [2024-07-15 10:23:20.323976] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:25.888 [2024-07-15 10:23:20.323985] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:25.888 [2024-07-15 10:23:20.323993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:25.888 [2024-07-15 10:23:20.324007] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:25.888 [2024-07-15 10:23:20.324024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:25.888 [2024-07-15 10:23:20.331890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:25.888 [2024-07-15 10:23:20.331938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.888 [2024-07-15 10:23:20.331959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.888 [2024-07-15 10:23:20.331972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.888 [2024-07-15 10:23:20.331985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.888 [2024-07-15 10:23:20.331994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:25.888 [2024-07-15 10:23:20.332010] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:25.888 [2024-07-15 10:23:20.332026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:25.888 [2024-07-15 10:23:20.339903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:25.888 [2024-07-15 10:23:20.339933] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:25.888 [2024-07-15 10:23:20.339942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:25.889 [2024-07-15 10:23:20.339954] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:25.889 [2024-07-15 10:23:20.339965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:25.889 [2024-07-15 10:23:20.339979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:25.889 [2024-07-15 10:23:20.347901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:25.889 [2024-07-15 10:23:20.347999] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:25.889 [2024-07-15 10:23:20.348017] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:25.889 [2024-07-15 10:23:20.348032] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:25.889 [2024-07-15 10:23:20.348041] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:25.889 [2024-07-15 10:23:20.348051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:25.889 [2024-07-15 10:23:20.355899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:25.889 [2024-07-15 10:23:20.355927] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:25.889 [2024-07-15 10:23:20.355951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:25.889 [2024-07-15 10:23:20.355968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:25.889 [2024-07-15 10:23:20.355981] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:25.889 [2024-07-15 10:23:20.355990] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:25.889 [2024-07-15 10:23:20.356000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:25.889 [2024-07-15 10:23:20.363891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:25.889 [2024-07-15 10:23:20.363923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:25.889 [2024-07-15 10:23:20.363940] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:25.889 [2024-07-15 10:23:20.363954] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:25.889 [2024-07-15 10:23:20.363962] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:25.889 [2024-07-15 10:23:20.363972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:25.889 [2024-07-15 10:23:20.371887] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:25.889 [2024-07-15 10:23:20.371910] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:25.889 [2024-07-15 10:23:20.371923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:25.889 [2024-07-15 10:23:20.371939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:25.889 [2024-07-15 10:23:20.371950] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:10:25.889 [2024-07-15 10:23:20.371959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:25.889 [2024-07-15 10:23:20.371968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:25.889 [2024-07-15 10:23:20.371977] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:25.889 [2024-07-15 10:23:20.371985] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:25.889 [2024-07-15 10:23:20.371993] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:25.889 [2024-07-15 10:23:20.372024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:25.889 [2024-07-15 10:23:20.379903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:25.889 [2024-07-15 10:23:20.379929] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:25.889 [2024-07-15 10:23:20.387898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:25.889 [2024-07-15 10:23:20.387925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:25.889 [2024-07-15 10:23:20.395904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:25.889 [2024-07-15 10:23:20.395930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:25.889 [2024-07-15 10:23:20.403889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:25.889 [2024-07-15 10:23:20.403951] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:25.889 [2024-07-15 10:23:20.403968] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:25.889 [2024-07-15 10:23:20.403977] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
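An aside on reading this qpair trace: the GET FEATURES NUMBER OF QUEUES completion just above carries cdw0:7e007e, which per the NVMe spec packs the 0-based completion queue count in bits 31:16 and the submission queue count in bits 15:0. A throwaway decode, not part of the test:

    # Decode cdw0 from the Get Features (Number of Queues) completion above.
    cdw0=0x7e007e
    echo "I/O SQs: $(( (cdw0 & 0xffff) + 1 ))"   # 0x7e + 1 = 127
    echo "I/O CQs: $(( (cdw0 >> 16) + 1 ))"      # 0x7e + 1 = 127

That matches the 127 I/O queues reported in the identify dump below. The GET LOG PAGE sequence that continues next fetches the error log (cdw10 LID 0x01, 8192 bytes), the SMART/health and firmware-slot logs (LIDs 0x02 and 0x03, 512 bytes each), and the commands-supported-and-effects log (LID 0x05, 4096 bytes), matching the len: values in the PRP setup lines.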
00:10:25.889 [2024-07-15 10:23:20.403983] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:25.889 [2024-07-15 10:23:20.403993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:25.889 [2024-07-15 10:23:20.404006] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:25.889 [2024-07-15 10:23:20.404016] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:25.889 [2024-07-15 10:23:20.404025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:25.889 [2024-07-15 10:23:20.404037] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:25.889 [2024-07-15 10:23:20.404046] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:25.889 [2024-07-15 10:23:20.404056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:25.889 [2024-07-15 10:23:20.404068] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:25.889 [2024-07-15 10:23:20.404077] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:25.889 [2024-07-15 10:23:20.404086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:25.889 [2024-07-15 10:23:20.411887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:25.889 [2024-07-15 10:23:20.411916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:25.889 [2024-07-15 10:23:20.411950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:25.889 [2024-07-15 10:23:20.411963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:25.889 ===================================================== 00:10:25.889 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:25.889 ===================================================== 00:10:25.889 Controller Capabilities/Features 00:10:25.889 ================================ 00:10:25.889 Vendor ID: 4e58 00:10:25.889 Subsystem Vendor ID: 4e58 00:10:25.889 Serial Number: SPDK2 00:10:25.889 Model Number: SPDK bdev Controller 00:10:25.889 Firmware Version: 24.09 00:10:25.889 Recommended Arb Burst: 6 00:10:25.889 IEEE OUI Identifier: 8d 6b 50 00:10:25.889 Multi-path I/O 00:10:25.889 May have multiple subsystem ports: Yes 00:10:25.889 May have multiple controllers: Yes 00:10:25.889 Associated with SR-IOV VF: No 00:10:25.889 Max Data Transfer Size: 131072 00:10:25.889 Max Number of Namespaces: 32 00:10:25.889 Max Number of I/O Queues: 127 00:10:25.889 NVMe Specification Version (VS): 1.3 00:10:25.889 NVMe Specification Version (Identify): 1.3 00:10:25.889 Maximum Queue Entries: 256 00:10:25.889 Contiguous Queues Required: Yes 00:10:25.889 Arbitration Mechanisms 
Supported 00:10:25.889 Weighted Round Robin: Not Supported 00:10:25.889 Vendor Specific: Not Supported 00:10:25.889 Reset Timeout: 15000 ms 00:10:25.889 Doorbell Stride: 4 bytes 00:10:25.889 NVM Subsystem Reset: Not Supported 00:10:25.889 Command Sets Supported 00:10:25.889 NVM Command Set: Supported 00:10:25.889 Boot Partition: Not Supported 00:10:25.889 Memory Page Size Minimum: 4096 bytes 00:10:25.889 Memory Page Size Maximum: 4096 bytes 00:10:25.889 Persistent Memory Region: Not Supported 00:10:25.889 Optional Asynchronous Events Supported 00:10:25.889 Namespace Attribute Notices: Supported 00:10:25.889 Firmware Activation Notices: Not Supported 00:10:25.889 ANA Change Notices: Not Supported 00:10:25.889 PLE Aggregate Log Change Notices: Not Supported 00:10:25.889 LBA Status Info Alert Notices: Not Supported 00:10:25.889 EGE Aggregate Log Change Notices: Not Supported 00:10:25.889 Normal NVM Subsystem Shutdown event: Not Supported 00:10:25.889 Zone Descriptor Change Notices: Not Supported 00:10:25.889 Discovery Log Change Notices: Not Supported 00:10:25.889 Controller Attributes 00:10:25.889 128-bit Host Identifier: Supported 00:10:25.889 Non-Operational Permissive Mode: Not Supported 00:10:25.889 NVM Sets: Not Supported 00:10:25.889 Read Recovery Levels: Not Supported 00:10:25.889 Endurance Groups: Not Supported 00:10:25.889 Predictable Latency Mode: Not Supported 00:10:25.889 Traffic Based Keep ALive: Not Supported 00:10:25.889 Namespace Granularity: Not Supported 00:10:25.889 SQ Associations: Not Supported 00:10:25.889 UUID List: Not Supported 00:10:25.889 Multi-Domain Subsystem: Not Supported 00:10:25.889 Fixed Capacity Management: Not Supported 00:10:25.889 Variable Capacity Management: Not Supported 00:10:25.889 Delete Endurance Group: Not Supported 00:10:25.889 Delete NVM Set: Not Supported 00:10:25.890 Extended LBA Formats Supported: Not Supported 00:10:25.890 Flexible Data Placement Supported: Not Supported 00:10:25.890 00:10:25.890 Controller Memory Buffer Support 00:10:25.890 ================================ 00:10:25.890 Supported: No 00:10:25.890 00:10:25.890 Persistent Memory Region Support 00:10:25.890 ================================ 00:10:25.890 Supported: No 00:10:25.890 00:10:25.890 Admin Command Set Attributes 00:10:25.890 ============================ 00:10:25.890 Security Send/Receive: Not Supported 00:10:25.890 Format NVM: Not Supported 00:10:25.890 Firmware Activate/Download: Not Supported 00:10:25.890 Namespace Management: Not Supported 00:10:25.890 Device Self-Test: Not Supported 00:10:25.890 Directives: Not Supported 00:10:25.890 NVMe-MI: Not Supported 00:10:25.890 Virtualization Management: Not Supported 00:10:25.890 Doorbell Buffer Config: Not Supported 00:10:25.890 Get LBA Status Capability: Not Supported 00:10:25.890 Command & Feature Lockdown Capability: Not Supported 00:10:25.890 Abort Command Limit: 4 00:10:25.890 Async Event Request Limit: 4 00:10:25.890 Number of Firmware Slots: N/A 00:10:25.890 Firmware Slot 1 Read-Only: N/A 00:10:25.890 Firmware Activation Without Reset: N/A 00:10:25.890 Multiple Update Detection Support: N/A 00:10:25.890 Firmware Update Granularity: No Information Provided 00:10:25.890 Per-Namespace SMART Log: No 00:10:25.890 Asymmetric Namespace Access Log Page: Not Supported 00:10:25.890 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:25.890 Command Effects Log Page: Supported 00:10:25.890 Get Log Page Extended Data: Supported 00:10:25.890 Telemetry Log Pages: Not Supported 00:10:25.890 Persistent Event Log Pages: Not Supported 
00:10:25.890 Supported Log Pages Log Page: May Support 00:10:25.890 Commands Supported & Effects Log Page: Not Supported 00:10:25.890 Feature Identifiers & Effects Log Page:May Support 00:10:25.890 NVMe-MI Commands & Effects Log Page: May Support 00:10:25.890 Data Area 4 for Telemetry Log: Not Supported 00:10:25.890 Error Log Page Entries Supported: 128 00:10:25.890 Keep Alive: Supported 00:10:25.890 Keep Alive Granularity: 10000 ms 00:10:25.890 00:10:25.890 NVM Command Set Attributes 00:10:25.890 ========================== 00:10:25.890 Submission Queue Entry Size 00:10:25.890 Max: 64 00:10:25.890 Min: 64 00:10:25.890 Completion Queue Entry Size 00:10:25.890 Max: 16 00:10:25.890 Min: 16 00:10:25.890 Number of Namespaces: 32 00:10:25.890 Compare Command: Supported 00:10:25.890 Write Uncorrectable Command: Not Supported 00:10:25.890 Dataset Management Command: Supported 00:10:25.890 Write Zeroes Command: Supported 00:10:25.890 Set Features Save Field: Not Supported 00:10:25.890 Reservations: Not Supported 00:10:25.890 Timestamp: Not Supported 00:10:25.890 Copy: Supported 00:10:25.890 Volatile Write Cache: Present 00:10:25.890 Atomic Write Unit (Normal): 1 00:10:25.890 Atomic Write Unit (PFail): 1 00:10:25.890 Atomic Compare & Write Unit: 1 00:10:25.890 Fused Compare & Write: Supported 00:10:25.890 Scatter-Gather List 00:10:25.890 SGL Command Set: Supported (Dword aligned) 00:10:25.890 SGL Keyed: Not Supported 00:10:25.890 SGL Bit Bucket Descriptor: Not Supported 00:10:25.890 SGL Metadata Pointer: Not Supported 00:10:25.890 Oversized SGL: Not Supported 00:10:25.890 SGL Metadata Address: Not Supported 00:10:25.890 SGL Offset: Not Supported 00:10:25.890 Transport SGL Data Block: Not Supported 00:10:25.890 Replay Protected Memory Block: Not Supported 00:10:25.890 00:10:25.890 Firmware Slot Information 00:10:25.890 ========================= 00:10:25.890 Active slot: 1 00:10:25.890 Slot 1 Firmware Revision: 24.09 00:10:25.890 00:10:25.890 00:10:25.890 Commands Supported and Effects 00:10:25.890 ============================== 00:10:25.890 Admin Commands 00:10:25.890 -------------- 00:10:25.890 Get Log Page (02h): Supported 00:10:25.890 Identify (06h): Supported 00:10:25.890 Abort (08h): Supported 00:10:25.890 Set Features (09h): Supported 00:10:25.890 Get Features (0Ah): Supported 00:10:25.890 Asynchronous Event Request (0Ch): Supported 00:10:25.890 Keep Alive (18h): Supported 00:10:25.890 I/O Commands 00:10:25.890 ------------ 00:10:25.890 Flush (00h): Supported LBA-Change 00:10:25.890 Write (01h): Supported LBA-Change 00:10:25.890 Read (02h): Supported 00:10:25.890 Compare (05h): Supported 00:10:25.890 Write Zeroes (08h): Supported LBA-Change 00:10:25.890 Dataset Management (09h): Supported LBA-Change 00:10:25.890 Copy (19h): Supported LBA-Change 00:10:25.890 00:10:25.890 Error Log 00:10:25.890 ========= 00:10:25.890 00:10:25.890 Arbitration 00:10:25.890 =========== 00:10:25.890 Arbitration Burst: 1 00:10:25.890 00:10:25.890 Power Management 00:10:25.890 ================ 00:10:25.890 Number of Power States: 1 00:10:25.890 Current Power State: Power State #0 00:10:25.890 Power State #0: 00:10:25.890 Max Power: 0.00 W 00:10:25.890 Non-Operational State: Operational 00:10:25.890 Entry Latency: Not Reported 00:10:25.890 Exit Latency: Not Reported 00:10:25.890 Relative Read Throughput: 0 00:10:25.890 Relative Read Latency: 0 00:10:25.890 Relative Write Throughput: 0 00:10:25.890 Relative Write Latency: 0 00:10:25.890 Idle Power: Not Reported 00:10:25.890 Active Power: Not Reported 00:10:25.890 
Non-Operational Permissive Mode: Not Supported
00:10:25.890
00:10:25.890 Health Information
00:10:25.890 ==================
00:10:25.890 Critical Warnings:
00:10:25.890 Available Spare Space: OK
00:10:25.890 Temperature: OK
00:10:25.890 Device Reliability: OK
00:10:25.890 Read Only: No
00:10:25.890 Volatile Memory Backup: OK
00:10:25.890 Current Temperature: 0 Kelvin (-273 Celsius)
00:10:25.890 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:10:25.890 Available Spare: 0%
00:10:25.890 [2024-07-15 10:23:20.412084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:10:25.890 [2024-07-15 10:23:20.419888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:10:25.890 [2024-07-15 10:23:20.419957] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD
00:10:25.890 [2024-07-15 10:23:20.419977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:25.890 [2024-07-15 10:23:20.419989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:25.890 [2024-07-15 10:23:20.420000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:25.890 [2024-07-15 10:23:20.420010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:25.890 [2024-07-15 10:23:20.420102] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:10:25.890 [2024-07-15 10:23:20.420125] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:10:25.890 [2024-07-15 10:23:20.421110] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:10:25.890 [2024-07-15 10:23:20.421208] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us
00:10:25.890 [2024-07-15 10:23:20.421237] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms
00:10:25.890 [2024-07-15 10:23:20.422114] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:10:25.890 [2024-07-15 10:23:20.422140] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds
00:10:25.891 [2024-07-15 10:23:20.422219] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:10:25.891 [2024-07-15 10:23:20.423396] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:10:25.891 Available Spare Threshold: 0%
00:10:25.891 Life Percentage Used: 0%
00:10:25.891 Data Units Read: 0
00:10:25.891 Data Units Written: 0
00:10:25.891 Host Read Commands: 0
00:10:25.891 Host Write Commands: 0
00:10:25.891 Controller Busy Time: 0 minutes
00:10:25.891 Power Cycles: 0
00:10:25.891 Power On Hours: 0 hours
00:10:25.891 Unsafe Shutdowns: 0
00:10:25.891 Unrecoverable Media
Errors: 0 00:10:25.891 Lifetime Error Log Entries: 0 00:10:25.891 Warning Temperature Time: 0 minutes 00:10:25.891 Critical Temperature Time: 0 minutes 00:10:25.891 00:10:25.891 Number of Queues 00:10:25.891 ================ 00:10:25.891 Number of I/O Submission Queues: 127 00:10:25.891 Number of I/O Completion Queues: 127 00:10:25.891 00:10:25.891 Active Namespaces 00:10:25.891 ================= 00:10:25.891 Namespace ID:1 00:10:25.891 Error Recovery Timeout: Unlimited 00:10:25.891 Command Set Identifier: NVM (00h) 00:10:25.891 Deallocate: Supported 00:10:25.891 Deallocated/Unwritten Error: Not Supported 00:10:25.891 Deallocated Read Value: Unknown 00:10:25.891 Deallocate in Write Zeroes: Not Supported 00:10:25.891 Deallocated Guard Field: 0xFFFF 00:10:25.891 Flush: Supported 00:10:25.891 Reservation: Supported 00:10:25.891 Namespace Sharing Capabilities: Multiple Controllers 00:10:25.891 Size (in LBAs): 131072 (0GiB) 00:10:25.891 Capacity (in LBAs): 131072 (0GiB) 00:10:25.891 Utilization (in LBAs): 131072 (0GiB) 00:10:25.891 NGUID: 6886E9D3C0784261B4E0C3BB4AAAB4BE 00:10:25.891 UUID: 6886e9d3-c078-4261-b4e0-c3bb4aaab4be 00:10:25.891 Thin Provisioning: Not Supported 00:10:25.891 Per-NS Atomic Units: Yes 00:10:25.891 Atomic Boundary Size (Normal): 0 00:10:25.891 Atomic Boundary Size (PFail): 0 00:10:25.891 Atomic Boundary Offset: 0 00:10:25.891 Maximum Single Source Range Length: 65535 00:10:25.891 Maximum Copy Length: 65535 00:10:25.891 Maximum Source Range Count: 1 00:10:25.891 NGUID/EUI64 Never Reused: No 00:10:25.891 Namespace Write Protected: No 00:10:25.891 Number of LBA Formats: 1 00:10:25.891 Current LBA Format: LBA Format #00 00:10:25.891 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:25.891 00:10:25.891 10:23:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:25.891 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.148 [2024-07-15 10:23:20.662782] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:31.440 Initializing NVMe Controllers 00:10:31.440 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:31.440 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:31.440 Initialization complete. Launching workers. 
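This is the @84 spdk_nvme_perf run from above kicking off: 128 outstanding 4 KiB reads (-q 128 -o 4096) for 5 seconds (-t 5) from a single worker pinned by core mask 0x2, i.e. core 1, which is why the table below reports NSID 1 from core 1. The read/write pair as invoked in this job:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    "$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
    "$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2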
00:10:31.440 ======================================================== 00:10:31.440 Latency(us) 00:10:31.440 Device Information : IOPS MiB/s Average min max 00:10:31.440 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34415.92 134.44 3718.54 1161.29 7588.30 00:10:31.440 ======================================================== 00:10:31.440 Total : 34415.92 134.44 3718.54 1161.29 7588.30 00:10:31.440 00:10:31.440 [2024-07-15 10:23:25.772288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:31.440 10:23:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:31.440 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.440 [2024-07-15 10:23:26.002898] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:36.701 Initializing NVMe Controllers 00:10:36.701 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:36.701 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:36.701 Initialization complete. Launching workers. 00:10:36.701 ======================================================== 00:10:36.701 Latency(us) 00:10:36.701 Device Information : IOPS MiB/s Average min max 00:10:36.701 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31958.00 124.84 4005.89 1195.69 9764.78 00:10:36.701 ======================================================== 00:10:36.701 Total : 31958.00 124.84 4005.89 1195.69 9764.78 00:10:36.701 00:10:36.701 [2024-07-15 10:23:31.024360] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:36.701 10:23:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:36.701 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.701 [2024-07-15 10:23:31.234181] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:41.961 [2024-07-15 10:23:36.385048] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:41.961 Initializing NVMe Controllers 00:10:41.961 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:41.961 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:41.961 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:41.961 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:41.961 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:41.961 Initialization complete. Launching workers. 
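Before the reconnect threads report in, a quick sanity check on the two perf tables above: for a closed-loop load generator, Little's law gives outstanding IOs = IOPS x mean latency, and both runs sit almost exactly at the configured queue depth of 128:

    # IOPS * average latency (us -> s) should recover the -q 128 queue depth:
    awk 'BEGIN { printf "read:  %.1f\n", 34415.92 * 3718.54 / 1e6 }'   # ~= 128.0
    awk 'BEGIN { printf "write: %.1f\n", 31958.00 * 4005.89 / 1e6 }'   # ~= 128.0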
00:10:41.961 Starting thread on core 2 00:10:41.961 Starting thread on core 3 00:10:41.961 Starting thread on core 1 00:10:41.961 10:23:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:41.961 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.219 [2024-07-15 10:23:36.690336] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:45.530 [2024-07-15 10:23:40.067168] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:45.530 Initializing NVMe Controllers 00:10:45.530 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:45.530 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:45.530 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:45.530 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:45.530 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:45.530 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:45.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:45.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:45.530 Initialization complete. Launching workers. 00:10:45.530 Starting thread on core 1 with urgent priority queue 00:10:45.530 Starting thread on core 2 with urgent priority queue 00:10:45.530 Starting thread on core 3 with urgent priority queue 00:10:45.530 Starting thread on core 0 with urgent priority queue 00:10:45.530 SPDK bdev Controller (SPDK2 ) core 0: 1501.33 IO/s 66.61 secs/100000 ios 00:10:45.530 SPDK bdev Controller (SPDK2 ) core 1: 1602.33 IO/s 62.41 secs/100000 ios 00:10:45.530 SPDK bdev Controller (SPDK2 ) core 2: 1485.33 IO/s 67.32 secs/100000 ios 00:10:45.530 SPDK bdev Controller (SPDK2 ) core 3: 1557.33 IO/s 64.21 secs/100000 ios 00:10:45.530 ======================================================== 00:10:45.530 00:10:45.530 10:23:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:45.530 EAL: No free 2048 kB hugepages reported on node 1 00:10:45.788 [2024-07-15 10:23:40.368357] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:45.788 Initializing NVMe Controllers 00:10:45.788 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:45.788 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:45.788 Namespace ID: 1 size: 0GB 00:10:45.788 Initialization complete. 00:10:45.788 INFO: using host memory buffer for IO 00:10:45.788 Hello world! 
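For reference, the spdk_nvme_perf runs captured above all follow one invocation pattern; a minimal sketch of the read run at nvmf_vfio_user.sh@84, assuming an SPDK checkout as the working directory (the command is taken verbatim from this log; the flag annotations are ours, and -s/-g are DPDK memory options as this harness passes them):

    # -q 128: queue depth; -o 4096: I/O size in bytes; -w read: workload type
    # -t 5: run time in seconds; -c 0x2: core mask (one worker on lcore 1)
    ./build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The write run at @85 differs only in '-w write', which matches the lower IOPS and higher average latency in its results table above.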
00:10:45.788 [2024-07-15 10:23:40.378407] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:45.788 10:23:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:46.046 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.046 [2024-07-15 10:23:40.675248] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:47.419 Initializing NVMe Controllers 00:10:47.419 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:47.419 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:47.419 Initialization complete. Launching workers. 00:10:47.419 submit (in ns) avg, min, max = 6027.8, 3505.6, 4027378.9 00:10:47.419 complete (in ns) avg, min, max = 27070.8, 2057.8, 6993486.7 00:10:47.419 00:10:47.419 Submit histogram 00:10:47.419 ================ 00:10:47.419 Range in us Cumulative Count 00:10:47.419 3.484 - 3.508: 0.0220% ( 3) 00:10:47.419 3.508 - 3.532: 0.4404% ( 57) 00:10:47.419 3.532 - 3.556: 1.6443% ( 164) 00:10:47.419 3.556 - 3.579: 4.1694% ( 344) 00:10:47.419 3.579 - 3.603: 8.8013% ( 631) 00:10:47.419 3.603 - 3.627: 15.5252% ( 916) 00:10:47.419 3.627 - 3.650: 24.0255% ( 1158) 00:10:47.419 3.650 - 3.674: 34.5078% ( 1428) 00:10:47.419 3.674 - 3.698: 42.7879% ( 1128) 00:10:47.419 3.698 - 3.721: 50.7818% ( 1089) 00:10:47.419 3.721 - 3.745: 56.5000% ( 779) 00:10:47.419 3.745 - 3.769: 61.7632% ( 717) 00:10:47.419 3.769 - 3.793: 66.1161% ( 593) 00:10:47.419 3.793 - 3.816: 69.9772% ( 526) 00:10:47.419 3.816 - 3.840: 73.0015% ( 412) 00:10:47.419 3.840 - 3.864: 76.4369% ( 468) 00:10:47.419 3.864 - 3.887: 79.6007% ( 431) 00:10:47.419 3.887 - 3.911: 82.8599% ( 444) 00:10:47.419 3.911 - 3.935: 85.5538% ( 367) 00:10:47.419 3.935 - 3.959: 87.4991% ( 265) 00:10:47.419 3.959 - 3.982: 89.4076% ( 260) 00:10:47.419 3.982 - 4.006: 91.0372% ( 222) 00:10:47.419 4.006 - 4.030: 92.2924% ( 171) 00:10:47.419 4.030 - 4.053: 93.2100% ( 125) 00:10:47.419 4.053 - 4.077: 94.0468% ( 114) 00:10:47.419 4.077 - 4.101: 94.7882% ( 101) 00:10:47.419 4.101 - 4.124: 95.3388% ( 75) 00:10:47.419 4.124 - 4.148: 95.8012% ( 63) 00:10:47.419 4.148 - 4.172: 96.0802% ( 38) 00:10:47.419 4.172 - 4.196: 96.2710% ( 26) 00:10:47.419 4.196 - 4.219: 96.4105% ( 19) 00:10:47.419 4.219 - 4.243: 96.4839% ( 10) 00:10:47.419 4.243 - 4.267: 96.5500% ( 9) 00:10:47.419 4.267 - 4.290: 96.6527% ( 14) 00:10:47.419 4.290 - 4.314: 96.7335% ( 11) 00:10:47.419 4.314 - 4.338: 96.7775% ( 6) 00:10:47.419 4.338 - 4.361: 96.8436% ( 9) 00:10:47.419 4.361 - 4.385: 96.8729% ( 4) 00:10:47.419 4.385 - 4.409: 96.9610% ( 12) 00:10:47.419 4.409 - 4.433: 96.9830% ( 3) 00:10:47.419 4.433 - 4.456: 96.9977% ( 2) 00:10:47.419 4.456 - 4.480: 97.0197% ( 3) 00:10:47.419 4.504 - 4.527: 97.0418% ( 3) 00:10:47.419 4.527 - 4.551: 97.0564% ( 2) 00:10:47.419 4.575 - 4.599: 97.0638% ( 1) 00:10:47.419 4.622 - 4.646: 97.0711% ( 1) 00:10:47.419 4.646 - 4.670: 97.0932% ( 3) 00:10:47.419 4.670 - 4.693: 97.1005% ( 1) 00:10:47.419 4.693 - 4.717: 97.1152% ( 2) 00:10:47.419 4.741 - 4.764: 97.1225% ( 1) 00:10:47.419 4.764 - 4.788: 97.1445% ( 3) 00:10:47.419 4.788 - 4.812: 97.1739% ( 4) 00:10:47.419 4.812 - 4.836: 97.2179% ( 6) 00:10:47.419 4.836 - 4.859: 97.2693% ( 7) 00:10:47.419 4.859 - 4.883: 97.3060% ( 5) 00:10:47.419 4.883 - 4.907: 97.3648% ( 8) 00:10:47.419 4.907 - 
4.930: 97.4015% ( 5) 00:10:47.419 4.930 - 4.954: 97.4382% ( 5) 00:10:47.419 4.954 - 4.978: 97.5262% ( 12) 00:10:47.419 4.978 - 5.001: 97.5923% ( 9) 00:10:47.419 5.001 - 5.025: 97.6437% ( 7) 00:10:47.419 5.025 - 5.049: 97.6657% ( 3) 00:10:47.419 5.049 - 5.073: 97.6804% ( 2) 00:10:47.419 5.073 - 5.096: 97.7024% ( 3) 00:10:47.419 5.096 - 5.120: 97.7244% ( 3) 00:10:47.419 5.120 - 5.144: 97.7538% ( 4) 00:10:47.419 5.144 - 5.167: 97.7978% ( 6) 00:10:47.419 5.167 - 5.191: 97.8199% ( 3) 00:10:47.419 5.191 - 5.215: 97.8272% ( 1) 00:10:47.419 5.239 - 5.262: 97.8566% ( 4) 00:10:47.419 5.262 - 5.286: 97.8639% ( 1) 00:10:47.419 5.286 - 5.310: 97.8712% ( 1) 00:10:47.419 5.310 - 5.333: 97.8859% ( 2) 00:10:47.419 5.333 - 5.357: 97.9079% ( 3) 00:10:47.419 5.357 - 5.381: 97.9153% ( 1) 00:10:47.419 5.381 - 5.404: 97.9226% ( 1) 00:10:47.419 5.404 - 5.428: 97.9373% ( 2) 00:10:47.419 5.570 - 5.594: 97.9447% ( 1) 00:10:47.419 5.618 - 5.641: 97.9520% ( 1) 00:10:47.419 5.641 - 5.665: 97.9667% ( 2) 00:10:47.419 5.665 - 5.689: 97.9740% ( 1) 00:10:47.419 5.713 - 5.736: 97.9814% ( 1) 00:10:47.419 5.736 - 5.760: 97.9887% ( 1) 00:10:47.419 5.973 - 5.997: 98.0034% ( 2) 00:10:47.419 6.021 - 6.044: 98.0107% ( 1) 00:10:47.419 6.068 - 6.116: 98.0181% ( 1) 00:10:47.419 6.163 - 6.210: 98.0254% ( 1) 00:10:47.419 6.210 - 6.258: 98.0401% ( 2) 00:10:47.419 6.447 - 6.495: 98.0474% ( 1) 00:10:47.419 6.542 - 6.590: 98.0548% ( 1) 00:10:47.419 6.637 - 6.684: 98.0621% ( 1) 00:10:47.419 6.732 - 6.779: 98.0768% ( 2) 00:10:47.419 6.921 - 6.969: 98.0841% ( 1) 00:10:47.419 7.016 - 7.064: 98.0915% ( 1) 00:10:47.419 7.111 - 7.159: 98.0988% ( 1) 00:10:47.419 7.206 - 7.253: 98.1061% ( 1) 00:10:47.419 7.348 - 7.396: 98.1208% ( 2) 00:10:47.419 7.396 - 7.443: 98.1282% ( 1) 00:10:47.419 7.443 - 7.490: 98.1355% ( 1) 00:10:47.419 7.490 - 7.538: 98.1502% ( 2) 00:10:47.419 7.538 - 7.585: 98.1649% ( 2) 00:10:47.419 7.585 - 7.633: 98.1795% ( 2) 00:10:47.419 7.633 - 7.680: 98.1942% ( 2) 00:10:47.419 7.680 - 7.727: 98.2016% ( 1) 00:10:47.419 7.727 - 7.775: 98.2163% ( 2) 00:10:47.419 7.775 - 7.822: 98.2309% ( 2) 00:10:47.419 7.822 - 7.870: 98.2603% ( 4) 00:10:47.419 7.870 - 7.917: 98.2750% ( 2) 00:10:47.419 7.917 - 7.964: 98.2897% ( 2) 00:10:47.419 7.964 - 8.012: 98.2970% ( 1) 00:10:47.419 8.012 - 8.059: 98.3043% ( 1) 00:10:47.419 8.107 - 8.154: 98.3190% ( 2) 00:10:47.419 8.154 - 8.201: 98.3264% ( 1) 00:10:47.419 8.249 - 8.296: 98.3337% ( 1) 00:10:47.419 8.296 - 8.344: 98.3410% ( 1) 00:10:47.419 8.344 - 8.391: 98.3484% ( 1) 00:10:47.419 8.391 - 8.439: 98.3631% ( 2) 00:10:47.419 8.486 - 8.533: 98.3704% ( 1) 00:10:47.419 8.533 - 8.581: 98.3777% ( 1) 00:10:47.419 8.581 - 8.628: 98.3851% ( 1) 00:10:47.419 8.628 - 8.676: 98.3998% ( 2) 00:10:47.419 8.676 - 8.723: 98.4071% ( 1) 00:10:47.419 8.723 - 8.770: 98.4365% ( 4) 00:10:47.419 8.818 - 8.865: 98.4438% ( 1) 00:10:47.419 8.865 - 8.913: 98.4511% ( 1) 00:10:47.419 8.913 - 8.960: 98.4585% ( 1) 00:10:47.419 9.007 - 9.055: 98.4805% ( 3) 00:10:47.419 9.055 - 9.102: 98.4879% ( 1) 00:10:47.419 9.244 - 9.292: 98.5025% ( 2) 00:10:47.419 9.292 - 9.339: 98.5172% ( 2) 00:10:47.419 9.434 - 9.481: 98.5246% ( 1) 00:10:47.419 9.481 - 9.529: 98.5319% ( 1) 00:10:47.419 9.529 - 9.576: 98.5392% ( 1) 00:10:47.419 9.576 - 9.624: 98.5466% ( 1) 00:10:47.419 9.624 - 9.671: 98.5539% ( 1) 00:10:47.419 9.719 - 9.766: 98.5613% ( 1) 00:10:47.419 9.766 - 9.813: 98.5686% ( 1) 00:10:47.419 9.956 - 10.003: 98.5759% ( 1) 00:10:47.419 10.003 - 10.050: 98.5906% ( 2) 00:10:47.419 10.050 - 10.098: 98.5980% ( 1) 00:10:47.419 10.145 - 10.193: 98.6053% ( 
1) 00:10:47.419 10.287 - 10.335: 98.6126% ( 1) 00:10:47.419 10.430 - 10.477: 98.6273% ( 2) 00:10:47.419 10.572 - 10.619: 98.6420% ( 2) 00:10:47.419 10.619 - 10.667: 98.6493% ( 1) 00:10:47.419 10.667 - 10.714: 98.6640% ( 2) 00:10:47.419 10.714 - 10.761: 98.6714% ( 1) 00:10:47.419 10.999 - 11.046: 98.6787% ( 1) 00:10:47.419 11.046 - 11.093: 98.6934% ( 2) 00:10:47.419 11.093 - 11.141: 98.7007% ( 1) 00:10:47.419 11.283 - 11.330: 98.7154% ( 2) 00:10:47.419 11.473 - 11.520: 98.7301% ( 2) 00:10:47.419 11.520 - 11.567: 98.7374% ( 1) 00:10:47.419 11.615 - 11.662: 98.7448% ( 1) 00:10:47.419 11.710 - 11.757: 98.7521% ( 1) 00:10:47.419 11.757 - 11.804: 98.7595% ( 1) 00:10:47.419 11.899 - 11.947: 98.7668% ( 1) 00:10:47.419 11.994 - 12.041: 98.7815% ( 2) 00:10:47.419 12.136 - 12.231: 98.7962% ( 2) 00:10:47.419 12.326 - 12.421: 98.8035% ( 1) 00:10:47.419 12.516 - 12.610: 98.8182% ( 2) 00:10:47.419 12.610 - 12.705: 98.8255% ( 1) 00:10:47.419 12.705 - 12.800: 98.8622% ( 5) 00:10:47.419 12.800 - 12.895: 98.8916% ( 4) 00:10:47.419 12.895 - 12.990: 98.8989% ( 1) 00:10:47.419 12.990 - 13.084: 98.9063% ( 1) 00:10:47.419 13.084 - 13.179: 98.9209% ( 2) 00:10:47.419 13.179 - 13.274: 98.9283% ( 1) 00:10:47.419 13.274 - 13.369: 98.9430% ( 2) 00:10:47.419 13.369 - 13.464: 98.9503% ( 1) 00:10:47.419 13.464 - 13.559: 98.9650% ( 2) 00:10:47.419 13.559 - 13.653: 98.9723% ( 1) 00:10:47.420 13.843 - 13.938: 98.9870% ( 2) 00:10:47.420 14.033 - 14.127: 98.9943% ( 1) 00:10:47.420 14.127 - 14.222: 99.0017% ( 1) 00:10:47.420 14.222 - 14.317: 99.0164% ( 2) 00:10:47.420 14.317 - 14.412: 99.0237% ( 1) 00:10:47.420 14.696 - 14.791: 99.0311% ( 1) 00:10:47.420 14.791 - 14.886: 99.0457% ( 2) 00:10:47.420 14.886 - 14.981: 99.0531% ( 1) 00:10:47.420 15.076 - 15.170: 99.0604% ( 1) 00:10:47.420 15.265 - 15.360: 99.0678% ( 1) 00:10:47.420 15.644 - 15.739: 99.0751% ( 1) 00:10:47.420 16.024 - 16.119: 99.0824% ( 1) 00:10:47.420 16.687 - 16.782: 99.0898% ( 1) 00:10:47.420 17.161 - 17.256: 99.0971% ( 1) 00:10:47.420 17.256 - 17.351: 99.1191% ( 3) 00:10:47.420 17.351 - 17.446: 99.1558% ( 5) 00:10:47.420 17.446 - 17.541: 99.1705% ( 2) 00:10:47.420 17.541 - 17.636: 99.2146% ( 6) 00:10:47.420 17.636 - 17.730: 99.2953% ( 11) 00:10:47.420 17.730 - 17.825: 99.3761% ( 11) 00:10:47.420 17.825 - 17.920: 99.4201% ( 6) 00:10:47.420 17.920 - 18.015: 99.4715% ( 7) 00:10:47.420 18.110 - 18.204: 99.5302% ( 8) 00:10:47.420 18.204 - 18.299: 99.5596% ( 4) 00:10:47.420 18.299 - 18.394: 99.6110% ( 7) 00:10:47.420 18.394 - 18.489: 99.6844% ( 10) 00:10:47.420 18.489 - 18.584: 99.7431% ( 8) 00:10:47.420 18.679 - 18.773: 99.7724% ( 4) 00:10:47.420 18.773 - 18.868: 99.7945% ( 3) 00:10:47.420 18.868 - 18.963: 99.8091% ( 2) 00:10:47.420 19.153 - 19.247: 99.8238% ( 2) 00:10:47.420 19.437 - 19.532: 99.8312% ( 1) 00:10:47.420 19.627 - 19.721: 99.8385% ( 1) 00:10:47.420 20.101 - 20.196: 99.8458% ( 1) 00:10:47.420 20.385 - 20.480: 99.8532% ( 1) 00:10:47.420 21.807 - 21.902: 99.8605% ( 1) 00:10:47.420 21.997 - 22.092: 99.8679% ( 1) 00:10:47.420 22.376 - 22.471: 99.8752% ( 1) 00:10:47.420 22.850 - 22.945: 99.8826% ( 1) 00:10:47.420 24.462 - 24.652: 99.8899% ( 1) 00:10:47.420 26.359 - 26.548: 99.8972% ( 1) 00:10:47.420 26.738 - 26.927: 99.9046% ( 1) 00:10:47.420 26.927 - 27.117: 99.9119% ( 1) 00:10:47.420 27.117 - 27.307: 99.9193% ( 1) 00:10:47.420 27.496 - 27.686: 99.9266% ( 1) 00:10:47.420 28.824 - 29.013: 99.9339% ( 1) 00:10:47.420 29.013 - 29.203: 99.9413% ( 1) 00:10:47.420 33.944 - 34.133: 99.9486% ( 1) 00:10:47.420 3980.705 - 4004.978: 99.9633% ( 2) 00:10:47.420 4004.978 - 
4029.250: 100.0000% ( 5) 00:10:47.420 00:10:47.420 Complete histogram 00:10:47.420 ================== 00:10:47.420 Range in us Cumulative Count 00:10:47.420 2.050 - 2.062: 0.5872% ( 80) 00:10:47.420 2.062 - 2.074: 34.2069% ( 4580) 00:10:47.420 2.074 - 2.086: 48.3961% ( 1933) 00:10:47.420 2.086 - 2.098: 50.8919% ( 340) 00:10:47.420 2.098 - 2.110: 60.2070% ( 1269) 00:10:47.420 2.110 - 2.121: 63.5029% ( 449) 00:10:47.420 2.121 - 2.133: 67.4227% ( 534) 00:10:47.420 2.133 - 2.145: 79.9971% ( 1713) 00:10:47.420 2.145 - 2.157: 82.9113% ( 397) 00:10:47.420 2.157 - 2.169: 85.0767% ( 295) 00:10:47.420 2.169 - 2.181: 88.5635% ( 475) 00:10:47.420 2.181 - 2.193: 90.0316% ( 200) 00:10:47.420 2.193 - 2.204: 90.8757% ( 115) 00:10:47.420 2.204 - 2.216: 92.2411% ( 186) 00:10:47.420 2.216 - 2.228: 93.8853% ( 224) 00:10:47.420 2.228 - 2.240: 94.9130% ( 140) 00:10:47.420 2.240 - 2.252: 95.3975% ( 66) 00:10:47.420 2.252 - 2.264: 95.5590% ( 22) 00:10:47.420 2.264 - 2.276: 95.7278% ( 23) 00:10:47.420 2.276 - 2.287: 95.8306% ( 14) 00:10:47.420 2.287 - 2.299: 96.0508% ( 30) 00:10:47.420 2.299 - 2.311: 96.1682% ( 16) 00:10:47.420 2.311 - 2.323: 96.2710% ( 14) 00:10:47.420 2.323 - 2.335: 96.3224% ( 7) 00:10:47.420 2.335 - 2.347: 96.4031% ( 11) 00:10:47.420 2.347 - 2.359: 96.6013% ( 27) 00:10:47.420 2.359 - 2.370: 96.8656% ( 36) 00:10:47.420 2.370 - 2.382: 97.0564% ( 26) 00:10:47.420 2.382 - 2.394: 97.3280% ( 37) 00:10:47.420 [2024-07-15 10:23:41.772715] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:47.420 2.394 - 2.406: 97.6437% ( 43) 00:10:47.420 2.406 - 2.418: 97.8272% ( 25) 00:10:47.420 2.418 - 2.430: 97.9960% ( 23) 00:10:47.420 2.430 - 2.441: 98.1282% ( 18) 00:10:47.420 2.441 - 2.453: 98.2530% ( 17) 00:10:47.420 2.453 - 2.465: 98.3337% ( 11) 00:10:47.420 2.465 - 2.477: 98.4144% ( 11) 00:10:47.420 2.477 - 2.489: 98.4365% ( 3) 00:10:47.420 2.489 - 2.501: 98.4585% ( 3) 00:10:47.420 2.501 - 2.513: 98.4658% ( 1) 00:10:47.420 2.513 - 2.524: 98.4879% ( 3) 00:10:47.420 2.536 - 2.548: 98.5025% ( 2) 00:10:47.420 2.560 - 2.572: 98.5099% ( 1) 00:10:47.420 2.572 - 2.584: 98.5246% ( 2) 00:10:47.420 2.631 - 2.643: 98.5319% ( 1) 00:10:47.420 2.702 - 2.714: 98.5466% ( 2) 00:10:47.420 2.714 - 2.726: 98.5539% ( 1) 00:10:47.420 2.726 - 2.738: 98.5613% ( 1) 00:10:47.420 2.809 - 2.821: 98.5686% ( 1) 00:10:47.420 2.833 - 2.844: 98.5759% ( 1) 00:10:47.420 2.880 - 2.892: 98.5833% ( 1) 00:10:47.420 3.508 - 3.532: 98.5906% ( 1) 00:10:47.420 3.532 - 3.556: 98.5980% ( 1) 00:10:47.420 3.556 - 3.579: 98.6273% ( 4) 00:10:47.420 3.579 - 3.603: 98.6567% ( 4) 00:10:47.420 3.674 - 3.698: 98.6640% ( 1) 00:10:47.420 3.698 - 3.721: 98.6714% ( 1) 00:10:47.420 3.721 - 3.745: 98.6787% ( 1) 00:10:47.420 3.769 - 3.793: 98.7007% ( 3) 00:10:47.420 3.840 - 3.864: 98.7154% ( 2) 00:10:47.420 3.887 - 3.911: 98.7227% ( 1) 00:10:47.420 3.911 - 3.935: 98.7301% ( 1) 00:10:47.420 4.030 - 4.053: 98.7374% ( 1) 00:10:47.420 4.077 - 4.101: 98.7741% ( 5) 00:10:47.420 4.124 - 4.148: 98.7815% ( 1) 00:10:47.420 4.172 - 4.196: 98.7888% ( 1) 00:10:47.420 4.219 - 4.243: 98.7962% ( 1) 00:10:47.420 4.243 - 4.267: 98.8035% ( 1) 00:10:47.420 5.357 - 5.381: 98.8108% ( 1) 00:10:47.420 5.594 - 5.618: 98.8255% ( 2) 00:10:47.420 5.641 - 5.665: 98.8329% ( 1) 00:10:47.420 5.665 - 5.689: 98.8402% ( 1) 00:10:47.420 5.689 - 5.713: 98.8475% ( 1) 00:10:47.420 5.973 - 5.997: 98.8549% ( 1) 00:10:47.420 6.210 - 6.258: 98.8622% ( 1) 00:10:47.420 6.684 - 6.732: 98.8696% ( 1) 00:10:47.420 6.874 - 6.921: 98.8769% ( 1) 00:10:47.420 6.969 - 
7.016: 98.8842% ( 1) 00:10:47.420 7.016 - 7.064: 98.8916% ( 1) 00:10:47.420 7.064 - 7.111: 98.8989% ( 1) 00:10:47.420 7.538 - 7.585: 98.9063% ( 1) 00:10:47.420 8.913 - 8.960: 98.9136% ( 1) 00:10:47.420 13.274 - 13.369: 98.9209% ( 1) 00:10:47.420 15.076 - 15.170: 98.9283% ( 1) 00:10:47.420 15.360 - 15.455: 98.9430% ( 2) 00:10:47.420 15.455 - 15.550: 98.9576% ( 2) 00:10:47.420 15.644 - 15.739: 98.9723% ( 2) 00:10:47.420 15.739 - 15.834: 98.9797% ( 1) 00:10:47.420 15.834 - 15.929: 99.0090% ( 4) 00:10:47.420 15.929 - 16.024: 99.0384% ( 4) 00:10:47.420 16.024 - 16.119: 99.0824% ( 6) 00:10:47.420 16.119 - 16.213: 99.0971% ( 2) 00:10:47.420 16.213 - 16.308: 99.1558% ( 8) 00:10:47.420 16.308 - 16.403: 99.1852% ( 4) 00:10:47.420 16.403 - 16.498: 99.2146% ( 4) 00:10:47.420 16.498 - 16.593: 99.2219% ( 1) 00:10:47.420 16.593 - 16.687: 99.2586% ( 5) 00:10:47.420 16.687 - 16.782: 99.2806% ( 3) 00:10:47.420 16.782 - 16.877: 99.3026% ( 3) 00:10:47.420 16.877 - 16.972: 99.3100% ( 1) 00:10:47.420 16.972 - 17.067: 99.3467% ( 5) 00:10:47.420 17.161 - 17.256: 99.3540% ( 1) 00:10:47.420 18.110 - 18.204: 99.3614% ( 1) 00:10:47.420 18.394 - 18.489: 99.3687% ( 1) 00:10:47.420 18.489 - 18.584: 99.3761% ( 1) 00:10:47.420 18.584 - 18.679: 99.3834% ( 1) 00:10:47.420 3980.705 - 4004.978: 99.8972% ( 70) 00:10:47.420 4004.978 - 4029.250: 99.9927% ( 13) 00:10:47.420 6990.507 - 7039.052: 100.0000% ( 1) 00:10:47.420 00:10:47.420 10:23:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:47.420 10:23:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:47.420 10:23:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:47.420 10:23:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:47.420 10:23:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:47.678 [ 00:10:47.678 { 00:10:47.678 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:47.678 "subtype": "Discovery", 00:10:47.678 "listen_addresses": [], 00:10:47.678 "allow_any_host": true, 00:10:47.678 "hosts": [] 00:10:47.678 }, 00:10:47.678 { 00:10:47.678 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:47.678 "subtype": "NVMe", 00:10:47.678 "listen_addresses": [ 00:10:47.678 { 00:10:47.678 "trtype": "VFIOUSER", 00:10:47.678 "adrfam": "IPv4", 00:10:47.678 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:47.678 "trsvcid": "0" 00:10:47.678 } 00:10:47.678 ], 00:10:47.678 "allow_any_host": true, 00:10:47.678 "hosts": [], 00:10:47.678 "serial_number": "SPDK1", 00:10:47.678 "model_number": "SPDK bdev Controller", 00:10:47.678 "max_namespaces": 32, 00:10:47.678 "min_cntlid": 1, 00:10:47.678 "max_cntlid": 65519, 00:10:47.678 "namespaces": [ 00:10:47.678 { 00:10:47.678 "nsid": 1, 00:10:47.678 "bdev_name": "Malloc1", 00:10:47.678 "name": "Malloc1", 00:10:47.678 "nguid": "1F1263B286AD47419ACD2F536D98E0D3", 00:10:47.678 "uuid": "1f1263b2-86ad-4741-9acd-2f536d98e0d3" 00:10:47.678 }, 00:10:47.678 { 00:10:47.678 "nsid": 2, 00:10:47.678 "bdev_name": "Malloc3", 00:10:47.678 "name": "Malloc3", 00:10:47.678 "nguid": "3E5820200F9240EC932CF7358C5FBE89", 00:10:47.678 "uuid": "3e582020-0f92-40ec-932c-f7358c5fbe89" 00:10:47.678 } 00:10:47.678 ] 00:10:47.678 }, 00:10:47.678 { 00:10:47.678 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:47.678 
"subtype": "NVMe", 00:10:47.678 "listen_addresses": [ 00:10:47.678 { 00:10:47.678 "trtype": "VFIOUSER", 00:10:47.678 "adrfam": "IPv4", 00:10:47.678 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:47.678 "trsvcid": "0" 00:10:47.678 } 00:10:47.678 ], 00:10:47.678 "allow_any_host": true, 00:10:47.678 "hosts": [], 00:10:47.678 "serial_number": "SPDK2", 00:10:47.678 "model_number": "SPDK bdev Controller", 00:10:47.678 "max_namespaces": 32, 00:10:47.678 "min_cntlid": 1, 00:10:47.678 "max_cntlid": 65519, 00:10:47.678 "namespaces": [ 00:10:47.678 { 00:10:47.678 "nsid": 1, 00:10:47.678 "bdev_name": "Malloc2", 00:10:47.678 "name": "Malloc2", 00:10:47.678 "nguid": "6886E9D3C0784261B4E0C3BB4AAAB4BE", 00:10:47.678 "uuid": "6886e9d3-c078-4261-b4e0-c3bb4aaab4be" 00:10:47.678 } 00:10:47.678 ] 00:10:47.678 } 00:10:47.678 ] 00:10:47.678 10:23:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:47.678 10:23:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2258366 00:10:47.678 10:23:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:47.678 10:23:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:47.678 10:23:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:47.678 10:23:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:47.678 10:23:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:47.678 10:23:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:47.678 10:23:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:47.678 10:23:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:47.678 EAL: No free 2048 kB hugepages reported on node 1 00:10:47.678 [2024-07-15 10:23:42.244343] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:47.936 Malloc4 00:10:47.936 10:23:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:48.214 [2024-07-15 10:23:42.598866] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:48.214 10:23:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:48.214 Asynchronous Event Request test 00:10:48.214 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:48.214 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:48.214 Registering asynchronous event callbacks... 00:10:48.214 Starting namespace attribute notice tests for all controllers... 00:10:48.214 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:48.214 aer_cb - Changed Namespace 00:10:48.214 Cleaning up... 
00:10:48.214 [ 00:10:48.214 { 00:10:48.214 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:48.214 "subtype": "Discovery", 00:10:48.214 "listen_addresses": [], 00:10:48.214 "allow_any_host": true, 00:10:48.214 "hosts": [] 00:10:48.214 }, 00:10:48.214 { 00:10:48.214 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:48.214 "subtype": "NVMe", 00:10:48.214 "listen_addresses": [ 00:10:48.214 { 00:10:48.214 "trtype": "VFIOUSER", 00:10:48.214 "adrfam": "IPv4", 00:10:48.214 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:48.214 "trsvcid": "0" 00:10:48.214 } 00:10:48.214 ], 00:10:48.214 "allow_any_host": true, 00:10:48.214 "hosts": [], 00:10:48.214 "serial_number": "SPDK1", 00:10:48.214 "model_number": "SPDK bdev Controller", 00:10:48.214 "max_namespaces": 32, 00:10:48.214 "min_cntlid": 1, 00:10:48.214 "max_cntlid": 65519, 00:10:48.214 "namespaces": [ 00:10:48.214 { 00:10:48.214 "nsid": 1, 00:10:48.214 "bdev_name": "Malloc1", 00:10:48.214 "name": "Malloc1", 00:10:48.214 "nguid": "1F1263B286AD47419ACD2F536D98E0D3", 00:10:48.214 "uuid": "1f1263b2-86ad-4741-9acd-2f536d98e0d3" 00:10:48.214 }, 00:10:48.214 { 00:10:48.214 "nsid": 2, 00:10:48.214 "bdev_name": "Malloc3", 00:10:48.214 "name": "Malloc3", 00:10:48.214 "nguid": "3E5820200F9240EC932CF7358C5FBE89", 00:10:48.214 "uuid": "3e582020-0f92-40ec-932c-f7358c5fbe89" 00:10:48.214 } 00:10:48.214 ] 00:10:48.214 }, 00:10:48.214 { 00:10:48.214 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:48.214 "subtype": "NVMe", 00:10:48.214 "listen_addresses": [ 00:10:48.214 { 00:10:48.214 "trtype": "VFIOUSER", 00:10:48.214 "adrfam": "IPv4", 00:10:48.214 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:48.214 "trsvcid": "0" 00:10:48.214 } 00:10:48.214 ], 00:10:48.214 "allow_any_host": true, 00:10:48.214 "hosts": [], 00:10:48.214 "serial_number": "SPDK2", 00:10:48.214 "model_number": "SPDK bdev Controller", 00:10:48.214 "max_namespaces": 32, 00:10:48.214 "min_cntlid": 1, 00:10:48.214 "max_cntlid": 65519, 00:10:48.214 "namespaces": [ 00:10:48.214 { 00:10:48.214 "nsid": 1, 00:10:48.214 "bdev_name": "Malloc2", 00:10:48.214 "name": "Malloc2", 00:10:48.214 "nguid": "6886E9D3C0784261B4E0C3BB4AAAB4BE", 00:10:48.214 "uuid": "6886e9d3-c078-4261-b4e0-c3bb4aaab4be" 00:10:48.214 }, 00:10:48.214 { 00:10:48.214 "nsid": 2, 00:10:48.214 "bdev_name": "Malloc4", 00:10:48.214 "name": "Malloc4", 00:10:48.214 "nguid": "FC1EFB22BF46408B9C0640F030AD73A6", 00:10:48.214 "uuid": "fc1efb22-bf46-408b-9c06-40f030ad73a6" 00:10:48.214 } 00:10:48.214 ] 00:10:48.214 } 00:10:48.214 ] 00:10:48.214 10:23:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2258366 00:10:48.214 10:23:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:48.214 10:23:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2252809 00:10:48.214 10:23:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2252809 ']' 00:10:48.214 10:23:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2252809 00:10:48.214 10:23:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:48.215 10:23:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:48.215 10:23:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2252809 00:10:48.472 10:23:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:48.472 10:23:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:10:48.472 10:23:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2252809' 00:10:48.473 killing process with pid 2252809 00:10:48.473 10:23:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2252809 00:10:48.473 10:23:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2252809 00:10:48.748 10:23:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:48.748 10:23:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:48.748 10:23:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:48.748 10:23:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:48.748 10:23:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:48.748 10:23:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2258559 00:10:48.748 10:23:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:48.748 10:23:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2258559' 00:10:48.748 Process pid: 2258559 00:10:48.748 10:23:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:48.748 10:23:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2258559 00:10:48.748 10:23:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2258559 ']' 00:10:48.748 10:23:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.748 10:23:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:48.748 10:23:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.748 10:23:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:48.748 10:23:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:48.748 [2024-07-15 10:23:43.313895] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:48.748 [2024-07-15 10:23:43.314944] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:48.748 [2024-07-15 10:23:43.315017] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.748 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.748 [2024-07-15 10:23:43.378216] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.006 [2024-07-15 10:23:43.496352] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.006 [2024-07-15 10:23:43.496407] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:49.006 [2024-07-15 10:23:43.496431] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.006 [2024-07-15 10:23:43.496452] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.006 [2024-07-15 10:23:43.496464] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.006 [2024-07-15 10:23:43.496551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.006 [2024-07-15 10:23:43.496619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.006 [2024-07-15 10:23:43.496713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.006 [2024-07-15 10:23:43.496715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.006 [2024-07-15 10:23:43.608211] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:49.006 [2024-07-15 10:23:43.608425] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:49.006 [2024-07-15 10:23:43.608713] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:49.006 [2024-07-15 10:23:43.609365] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:49.006 [2024-07-15 10:23:43.609599] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:10:49.935 10:23:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:49.935 10:23:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:49.935 10:23:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:50.864 10:23:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:51.122 10:23:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:51.122 10:23:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:51.122 10:23:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:51.122 10:23:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:51.122 10:23:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:51.382 Malloc1 00:10:51.382 10:23:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:51.640 10:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:51.897 10:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:52.153 10:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:10:52.153 10:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:52.153 10:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:52.410 Malloc2 00:10:52.410 10:23:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:52.667 10:23:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:52.925 10:23:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:53.182 10:23:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:53.182 10:23:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2258559 00:10:53.182 10:23:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2258559 ']' 00:10:53.182 10:23:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2258559 00:10:53.182 10:23:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:53.182 10:23:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:53.182 10:23:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2258559 00:10:53.182 10:23:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:53.182 10:23:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:53.182 10:23:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2258559' 00:10:53.182 killing process with pid 2258559 00:10:53.182 10:23:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2258559 00:10:53.182 10:23:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2258559 00:10:53.746 10:23:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:53.747 00:10:53.747 real 0m53.734s 00:10:53.747 user 3m31.653s 00:10:53.747 sys 0m4.821s 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:53.747 ************************************ 00:10:53.747 END TEST nvmf_vfio_user 00:10:53.747 ************************************ 00:10:53.747 10:23:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:53.747 10:23:48 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:53.747 10:23:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:53.747 10:23:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.747 10:23:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:53.747 ************************************ 00:10:53.747 START 
TEST nvmf_vfio_user_nvme_compliance 00:10:53.747 ************************************ 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:53.747 * Looking for test storage... 00:10:53.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2259194 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2259194' 00:10:53.747 Process pid: 2259194 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2259194 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2259194 ']' 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:53.747 10:23:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:53.747 [2024-07-15 10:23:48.313582] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:53.747 [2024-07-15 10:23:48.313674] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.747 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.747 [2024-07-15 10:23:48.375490] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:54.005 [2024-07-15 10:23:48.495589] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.005 [2024-07-15 10:23:48.495665] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.005 [2024-07-15 10:23:48.495681] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.005 [2024-07-15 10:23:48.495694] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.005 [2024-07-15 10:23:48.495714] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
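Condensing the rpc_cmd sequence the compliance script issues just below (each rpc_cmd maps onto a scripts/rpc.py call; this is a sketch with paths relative to an SPDK checkout, not the script's exact wrapper invocations):

    # Target for the compliance suite: core mask 0x7 (3 cores), all tracepoint groups (0xFFFF)
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # Then point the compliance binary at the vfio-user socket directory
    ./test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'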
00:10:54.005 [2024-07-15 10:23:48.495808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.005 [2024-07-15 10:23:48.495871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.005 [2024-07-15 10:23:48.495874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.935 10:23:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:54.935 10:23:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:10:54.935 10:23:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:55.868 malloc0 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:55.868 10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.868 
10:23:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:55.868 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.868 00:10:55.868 00:10:55.868 CUnit - A unit testing framework for C - Version 2.1-3 00:10:55.868 http://cunit.sourceforge.net/ 00:10:55.868 00:10:55.868 00:10:55.868 Suite: nvme_compliance 00:10:55.868 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 10:23:50.479439] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:55.868 [2024-07-15 10:23:50.480991] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:55.868 [2024-07-15 10:23:50.481030] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:55.868 [2024-07-15 10:23:50.481045] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:55.868 [2024-07-15 10:23:50.482462] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:56.126 passed 00:10:56.126 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 10:23:50.579172] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:56.126 [2024-07-15 10:23:50.582195] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:56.126 passed 00:10:56.126 Test: admin_identify_ns ...[2024-07-15 10:23:50.678651] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:56.126 [2024-07-15 10:23:50.737897] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:56.126 [2024-07-15 10:23:50.745898] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:56.126 [2024-07-15 10:23:50.767037] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:56.383 passed 00:10:56.383 Test: admin_get_features_mandatory_features ...[2024-07-15 10:23:50.859553] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:56.383 [2024-07-15 10:23:50.862579] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:56.383 passed 00:10:56.383 Test: admin_get_features_optional_features ...[2024-07-15 10:23:50.957239] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:56.383 [2024-07-15 10:23:50.961270] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:56.383 passed 00:10:56.639 Test: admin_set_features_number_of_queues ...[2024-07-15 10:23:51.055671] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:56.639 [2024-07-15 10:23:51.160003] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:56.639 passed 00:10:56.639 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 10:23:51.255729] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:56.639 [2024-07-15 10:23:51.258761] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:56.896 passed 00:10:56.896 Test: admin_get_log_page_with_lpo ...[2024-07-15 10:23:51.353493] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:56.896 [2024-07-15 10:23:51.420896] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:56.896 [2024-07-15 10:23:51.433983] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:56.896 passed 00:10:56.896 Test: fabric_property_get ...[2024-07-15 10:23:51.528251] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:56.896 [2024-07-15 10:23:51.529624] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:56.896 [2024-07-15 10:23:51.532290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:57.153 passed 00:10:57.153 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 10:23:51.624927] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:57.153 [2024-07-15 10:23:51.626257] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:57.153 [2024-07-15 10:23:51.628948] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:57.153 passed 00:10:57.153 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 10:23:51.724693] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:57.411 [2024-07-15 10:23:51.808890] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:57.411 [2024-07-15 10:23:51.824891] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:57.411 [2024-07-15 10:23:51.829996] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:57.411 passed 00:10:57.411 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 10:23:51.922991] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:57.411 [2024-07-15 10:23:51.924319] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:57.411 [2024-07-15 10:23:51.926012] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:57.411 passed 00:10:57.411 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 10:23:52.020707] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:57.675 [2024-07-15 10:23:52.093889] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:57.675 [2024-07-15 10:23:52.117889] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:57.675 [2024-07-15 10:23:52.123010] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:57.675 passed 00:10:57.675 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 10:23:52.220208] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:57.675 [2024-07-15 10:23:52.221553] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:57.675 [2024-07-15 10:23:52.221602] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:57.675 [2024-07-15 10:23:52.223230] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:57.675 passed 00:10:57.675 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 10:23:52.313944] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:57.972 [2024-07-15 10:23:52.406905] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:10:57.972 [2024-07-15 10:23:52.414888] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:57.972 [2024-07-15 10:23:52.422891] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:57.972 [2024-07-15 10:23:52.430889] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:57.972 [2024-07-15 10:23:52.460005] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:57.972 passed 00:10:57.972 Test: admin_create_io_sq_verify_pc ...[2024-07-15 10:23:52.548998] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:57.972 [2024-07-15 10:23:52.568903] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:57.972 [2024-07-15 10:23:52.586529] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:58.235 passed 00:10:58.235 Test: admin_create_io_qp_max_qps ...[2024-07-15 10:23:52.679183] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:59.179 [2024-07-15 10:23:53.782897] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:59.744 [2024-07-15 10:23:54.159882] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:59.744 passed 00:10:59.744 Test: admin_create_io_sq_shared_cq ...[2024-07-15 10:23:54.255757] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:59.744 [2024-07-15 10:23:54.386885] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:00.002 [2024-07-15 10:23:54.423994] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:00.002 passed 00:11:00.002 00:11:00.002 Run Summary: Type Total Ran Passed Failed Inactive 00:11:00.002 suites 1 1 n/a 0 0 00:11:00.002 tests 18 18 18 0 0 00:11:00.002 asserts 360 360 360 0 n/a 00:11:00.002 00:11:00.002 Elapsed time = 1.655 seconds 00:11:00.002 10:23:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2259194 00:11:00.002 10:23:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2259194 ']' 00:11:00.002 10:23:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2259194 00:11:00.002 10:23:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:11:00.002 10:23:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:00.002 10:23:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2259194 00:11:00.002 10:23:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:00.002 10:23:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:00.002 10:23:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2259194' 00:11:00.002 killing process with pid 2259194 00:11:00.002 10:23:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2259194 00:11:00.002 10:23:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2259194 00:11:00.261 10:23:54 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:11:00.261 10:23:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:00.261 00:11:00.261 real 0m6.617s 00:11:00.261 user 0m18.780s 00:11:00.261 sys 0m0.595s 00:11:00.261 10:23:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:00.261 10:23:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:00.261 ************************************ 00:11:00.261 END TEST nvmf_vfio_user_nvme_compliance 00:11:00.261 ************************************ 00:11:00.261 10:23:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:00.261 10:23:54 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:00.261 10:23:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:00.261 10:23:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.261 10:23:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:00.261 ************************************ 00:11:00.261 START TEST nvmf_vfio_user_fuzz 00:11:00.261 ************************************ 00:11:00.261 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:00.520 * Looking for test storage... 00:11:00.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
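For reference, the nvmf/common.sh trace above (it continues below) shows how each test derives its environment: fixed listener ports, a host NQN freshly generated by nvme-cli, and NET_TYPE=phy so the real e810 NICs get used. A minimal standalone sketch of the same setup, assuming nvme-cli is installed; the NVME_HOSTID derivation is inferred from the values in the trace:

#!/usr/bin/env bash
# Defaults mirrored from the nvmf/common.sh trace.
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:...
NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the UUID after the last colon
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT='nvme connect'
NET_TYPE=phy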
00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.520 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.521 10:23:54 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2260054 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2260054' 00:11:00.521 Process pid: 2260054 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2260054 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2260054 ']' 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
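The fuzz test above launches its own target with build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 (-i: shared-memory id, -e 0xFFFF: enable all tracepoint groups, -m 0x1: run on core 0) and then blocks in waitforlisten until the RPC socket answers. A sketch of that start-and-wait pattern (the polling loop and its bounds are illustrative, not the harness's exact code):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# Wait until the target serves RPCs on its default UNIX socket.
for _ in $(seq 1 100); do
    "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
done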
00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.521 10:23:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:00.782 10:23:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:00.782 10:23:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:11:00.782 10:23:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:01.716 malloc0 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:11:01.716 10:23:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:33.777 Fuzzing completed. 
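Expanded out of the rpc_cmd wrapper, the setup traced above amounts to the following scripts/rpc.py sequence: create a VFIOUSER transport, back it with a 64 MiB malloc bdev, expose that namespace through nqn.2021-09.io.spdk:cnode0, listen on the vfio-user socket directory, then fuzz the controller for 30 seconds with a fixed seed (a sketch; paths follow the workspace layout in the log):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"
$rpc nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
$rpc bdev_malloc_create 64 512 -b malloc0     # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
# -m: core mask, -t: seconds to fuzz, -S: RNG seed, -F: target transport ID
"$spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a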
Shutting down the fuzz application 00:11:33.777 00:11:33.777 Dumping successful admin opcodes: 00:11:33.777 8, 9, 10, 24, 00:11:33.777 Dumping successful io opcodes: 00:11:33.777 0, 00:11:33.777 NS: 0x200003a1ef00 I/O qp, Total commands completed: 621353, total successful commands: 2405, random_seed: 655401600 00:11:33.777 NS: 0x200003a1ef00 admin qp, Total commands completed: 102815, total successful commands: 849, random_seed: 1931596928 00:11:33.777 10:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:33.777 10:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.777 10:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:33.777 10:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.777 10:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2260054 00:11:33.777 10:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2260054 ']' 00:11:33.777 10:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2260054 00:11:33.777 10:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:11:33.777 10:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:33.777 10:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2260054 00:11:33.777 10:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:33.777 10:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:33.777 10:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2260054' 00:11:33.777 killing process with pid 2260054 00:11:33.777 10:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2260054 00:11:33.777 10:24:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2260054 00:11:33.777 10:24:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:33.777 10:24:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:33.777 00:11:33.777 real 0m32.357s 00:11:33.777 user 0m31.742s 00:11:33.777 sys 0m30.067s 00:11:33.777 10:24:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.777 10:24:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:33.777 ************************************ 00:11:33.777 END TEST nvmf_vfio_user_fuzz 00:11:33.777 ************************************ 00:11:33.777 10:24:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:33.777 10:24:27 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:33.777 10:24:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:33.777 10:24:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.777 10:24:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:33.777 ************************************ 00:11:33.777 
START TEST nvmf_host_management 00:11:33.777 ************************************ 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:33.777 * Looking for test storage... 00:11:33.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.777 10:24:27 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:33.777 10:24:27 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.777 10:24:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:33.778 10:24:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.778 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:33.778 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:33.778 10:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:33.778 10:24:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:34.709 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.709 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:34.710 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:34.710 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:34.710 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.710 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:34.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:11:34.968 00:11:34.968 --- 10.0.0.2 ping statistics --- 00:11:34.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.968 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:11:34.968 00:11:34.968 --- 10.0.0.1 ping statistics --- 00:11:34.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.968 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:34.968 10:24:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:34.969 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2266121 00:11:34.969 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:34.969 10:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2266121 00:11:34.969 10:24:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2266121 ']' 00:11:34.969 10:24:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.969 10:24:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:34.969 10:24:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:34.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.969 10:24:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:34.969 10:24:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:34.969 [2024-07-15 10:24:29.526019] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:34.969 [2024-07-15 10:24:29.526108] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.969 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.969 [2024-07-15 10:24:29.593665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.227 [2024-07-15 10:24:29.712231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.227 [2024-07-15 10:24:29.712294] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.227 [2024-07-15 10:24:29.712320] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.227 [2024-07-15 10:24:29.712334] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.227 [2024-07-15 10:24:29.712346] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.227 [2024-07-15 10:24:29.712439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.227 [2024-07-15 10:24:29.712563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.227 [2024-07-15 10:24:29.712628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:35.227 [2024-07-15 10:24:29.712631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:36.196 [2024-07-15 10:24:30.504969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:36.196 10:24:30 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:36.196 Malloc0 00:11:36.196 [2024-07-15 10:24:30.570112] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2266295 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2266295 /var/tmp/bdevperf.sock 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2266295 ']' 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:36.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
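gen_nvmf_target_json, whose trace continues below, assembles one bdev_nvme_attach_controller stanza per subsystem and hands the result to bdevperf as --json /dev/fd/63, i.e. bash process substitution, so no config file touches disk. A sketch of the equivalent direct invocation; the outer "subsystems" wrapper is assumed from SPDK's standard JSON-config shape (only the inner stanza appears verbatim in this log), and the params match the single-controller case printed below:

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# -q: queue depth, -o: I/O size in bytes, -w: workload type, -t: run time in seconds
"$bdevperf" -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)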
00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:36.196 { 00:11:36.196 "params": { 00:11:36.196 "name": "Nvme$subsystem", 00:11:36.196 "trtype": "$TEST_TRANSPORT", 00:11:36.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:36.196 "adrfam": "ipv4", 00:11:36.196 "trsvcid": "$NVMF_PORT", 00:11:36.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:36.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:36.196 "hdgst": ${hdgst:-false}, 00:11:36.196 "ddgst": ${ddgst:-false} 00:11:36.196 }, 00:11:36.196 "method": "bdev_nvme_attach_controller" 00:11:36.196 } 00:11:36.196 EOF 00:11:36.196 )") 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:36.196 10:24:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:36.196 "params": { 00:11:36.196 "name": "Nvme0", 00:11:36.196 "trtype": "tcp", 00:11:36.196 "traddr": "10.0.0.2", 00:11:36.196 "adrfam": "ipv4", 00:11:36.196 "trsvcid": "4420", 00:11:36.196 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:36.196 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:36.196 "hdgst": false, 00:11:36.196 "ddgst": false 00:11:36.196 }, 00:11:36.196 "method": "bdev_nvme_attach_controller" 00:11:36.196 }' 00:11:36.196 [2024-07-15 10:24:30.650812] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:36.196 [2024-07-15 10:24:30.650912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266295 ] 00:11:36.196 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.196 [2024-07-15 10:24:30.711851] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.196 [2024-07-15 10:24:30.822983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.454 Running I/O for 10 seconds... 
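Once bdevperf prints "Running I/O for 10 seconds...", the harness (next trace) polls the bdevperf RPC socket until the Nvme0n1 bdev has completed at least 100 reads before it starts tearing the host down mid-I/O. A standalone sketch of that poll, assuming jq is available; the retry delay is a guess, the log doesn't show it:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in $(seq 10 -1 1); do
    reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break
    sleep 1
done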
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']'
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:37.022 10:24:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:37.022 [2024-07-15 10:24:31.653763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:37.022 [2024-07-15 10:24:31.653818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining 63 in-flight commands of the 64-deep queue -- WRITE cid:58-63 (lba:122112-122752) and READ cid:0-56 (lba:114688-121856) -- were aborted with identical "ABORTED - SQ DELETION (00/08)" NOTICE pairs between 10:24:31.653860 and 10:24:31.655900; the repetitive entries are condensed here ...]
00:11:37.024 [2024-07-15 10:24:31.655916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae900 is same with the state(5) to be set
00:11:37.024 [2024-07-15 10:24:31.655992] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14ae900 was disconnected and freed. reset controller.
00:11:37.024 [2024-07-15 10:24:31.656059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:11:37.024 [2024-07-15 10:24:31.656083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... three further ASYNC EVENT REQUEST admin commands (qid:0 cid:1-3) aborted the same way; entries condensed ...]
00:11:37.024 [2024-07-15 10:24:31.656207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109d790 is same with the state(5) to be set
00:11:37.024 [2024-07-15 10:24:31.657349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:11:37.024 10:24:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:37.024 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:11:37.024 10:24:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:37.024 10:24:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
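What happened above: host_management.sh removed host0 from cnode0 while bdevperf still had a full queue in flight, so the target dropped the connection, every outstanding I/O came back "ABORTED - SQ DELETION", and the host disconnected and scheduled a controller reset; the host is re-added and the reset completes successfully further below. The gate before that, waitforio, is just an RPC polling loop. A minimal standalone sketch of it, assuming SPDK's scripts/rpc.py and jq are on PATH and bdevperf serves RPCs on /var/tmp/bdevperf.sock as in this run:

sock=/var/tmp/bdevperf.sock
bdev=Nvme0n1
ret=1
for ((i = 10; i != 0; i--)); do
    # bdev_get_iostat reports per-bdev counters as JSON; num_read_ops only
    # grows once the verify workload is actually reaching the target.
    reads=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
    if [ "${reads:-0}" -ge 100 ]; then   # this run saw read_io_count=835, passing first try
        ret=0
        break
    fi
    sleep 0.25   # hypothetical pacing; the harness's retry delay is not visible in the trace
done
exit "$ret"

The aborted job's bdevperf summary follows below.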
00:11:37.024 task offset: 121984 on job bdev=Nvme0n1 fails
00:11:37.024
00:11:37.024 Latency(us)
00:11:37.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:37.024 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:37.024 Job: Nvme0n1 ended in about 0.60 seconds with error
00:11:37.024 Verification LBA range: start 0x0 length 0x400
00:11:37.024 Nvme0n1 : 0.60 1493.60 93.35 106.69 0.00 39140.38 2973.39 33593.27
00:11:37.024 ===================================================================================================================
00:11:37.024 Total : 1493.60 93.35 106.69 0.00 39140.38 2973.39 33593.27
00:11:37.024 [2024-07-15 10:24:31.659212] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:37.024 [2024-07-15 10:24:31.659251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109d790 (9): Bad file descriptor
00:11:37.024 10:24:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:37.024 10:24:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:11:37.024 [2024-07-15 10:24:31.670252] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:11:38.395 10:24:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2266295
00:11:38.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2266295) - No such process
00:11:38.395 10:24:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:11:38.395 10:24:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:11:38.395 10:24:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:11:38.395 10:24:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:11:38.395 10:24:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:11:38.395 10:24:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:11:38.395 10:24:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:11:38.395 10:24:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:11:38.395 {
00:11:38.395 "params": {
00:11:38.395 "name": "Nvme$subsystem",
00:11:38.395 "trtype": "$TEST_TRANSPORT",
00:11:38.395 "traddr": "$NVMF_FIRST_TARGET_IP",
00:11:38.395 "adrfam": "ipv4",
00:11:38.395 "trsvcid": "$NVMF_PORT",
00:11:38.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:11:38.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:11:38.395 "hdgst": ${hdgst:-false},
00:11:38.395 "ddgst": ${ddgst:-false}
00:11:38.395 },
00:11:38.395 "method": "bdev_nvme_attach_controller"
00:11:38.395 }
00:11:38.395 EOF
00:11:38.395 )")
00:11:38.395 10:24:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:11:38.395 10:24:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
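gen_nvmf_target_json, expanded in the trace above, turns that heredoc into the JSON config bdevperf reads from /dev/fd/62 (the resolved document is printed just below). A hand-rolled equivalent is sketched here; the outer "subsystems" wrapper is an assumption about what the jq assembly emits, since only the inner method object is visible in the trace:

# Sketch: write the attach-controller config to a file instead of /dev/fd/62.
cat > /tmp/bdevperf_nvmf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same invocation shape as the harness: 64 outstanding 64 KiB verify I/Os for 1 second.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvmf.json -q 64 -o 65536 -w verify -t 1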
00:11:38.395 10:24:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:11:38.395 10:24:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:11:38.395 "params": {
00:11:38.395 "name": "Nvme0",
00:11:38.395 "trtype": "tcp",
00:11:38.395 "traddr": "10.0.0.2",
00:11:38.395 "adrfam": "ipv4",
00:11:38.395 "trsvcid": "4420",
00:11:38.395 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:11:38.395 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:11:38.395 "hdgst": false,
00:11:38.395 "ddgst": false
00:11:38.395 },
00:11:38.395 "method": "bdev_nvme_attach_controller"
00:11:38.395 }'
[2024-07-15 10:24:32.716572] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-15 10:24:32.716663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266567 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 10:24:32.778634] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 10:24:32.889496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 1 seconds...
00:11:39.618
00:11:39.618 Latency(us)
00:11:39.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:39.618 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:39.618 Verification LBA range: start 0x0 length 0x400
00:11:39.618 Nvme0n1 : 1.01 1534.91 95.93 0.00 0.00 40891.34 2026.76 32816.55
00:11:39.618 ===================================================================================================================
00:11:39.618 Total : 1534.91 95.93 0.00 0.00 40891.34 2026.76 32816.55
00:11:39.875 10:24:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
10:24:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
10:24:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
10:24:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
10:24:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
10:24:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
10:24:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
10:24:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
10:24:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
10:24:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
10:24:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
10:24:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
10:24:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
10:24:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
10:24:34 nvmf_tcp.nvmf_host_management --
nvmf/common.sh@489 -- # '[' -n 2266121 ']' 00:11:39.876 10:24:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2266121 00:11:39.876 10:24:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2266121 ']' 00:11:39.876 10:24:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2266121 00:11:39.876 10:24:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:11:39.876 10:24:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:39.876 10:24:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2266121 00:11:39.876 10:24:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:39.876 10:24:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:39.876 10:24:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2266121' 00:11:39.876 killing process with pid 2266121 00:11:39.876 10:24:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2266121 00:11:39.876 10:24:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2266121 00:11:40.135 [2024-07-15 10:24:34.679994] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:40.135 10:24:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:40.135 10:24:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:40.135 10:24:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:40.135 10:24:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:40.135 10:24:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:40.135 10:24:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.135 10:24:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.135 10:24:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.666 10:24:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:42.666 10:24:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:42.666 00:11:42.666 real 0m9.485s 00:11:42.666 user 0m23.281s 00:11:42.666 sys 0m2.725s 00:11:42.666 10:24:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:42.666 10:24:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:42.666 ************************************ 00:11:42.666 END TEST nvmf_host_management 00:11:42.666 ************************************ 00:11:42.666 10:24:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:42.666 10:24:36 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:42.666 10:24:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:42.666 10:24:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.666 10:24:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:42.666 ************************************ 00:11:42.666 START TEST nvmf_lvol 00:11:42.666 
************************************ 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:42.666 * Looking for test storage... 00:11:42.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:42.666 10:24:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:44.567 10:24:38 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:44.567 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:44.567 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:44.567 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:44.567 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.567 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:44.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:11:44.568 00:11:44.568 --- 10.0.0.2 ping statistics --- 00:11:44.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.568 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:44.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms
00:11:44.568
00:11:44.568 --- 10.0.0.1 ping statistics ---
00:11:44.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:44.568 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2268649
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2268649
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2268649 ']'
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable
00:11:44.568 10:24:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:11:44.568 [2024-07-15 10:24:39.048542] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:11:44.568 [2024-07-15 10:24:39.048639] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:44.568 EAL: No free 2048 kB hugepages reported on node 1
00:11:44.568 [2024-07-15 10:24:39.116902] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
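Before any of this, nvmf_tcp_init (traced in the lines above) split the two e810 ports so the target (10.0.0.2, inside the cvl_0_0_ns_spdk namespace) and the initiator (10.0.0.1, root namespace) talk over a real link, then verified connectivity with the pings whose replies appear above. Condensed as a standalone sketch, assuming ports named cvl_0_0/cvl_0_1 as in this job and root privileges:

ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator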
00:11:44.826 [2024-07-15 10:24:39.233099] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:44.826 [2024-07-15 10:24:39.233142] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:44.826 [2024-07-15 10:24:39.233181] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:44.826 [2024-07-15 10:24:39.233195] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:44.826 [2024-07-15 10:24:39.233207] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:44.826 [2024-07-15 10:24:39.233303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:11:44.826 [2024-07-15 10:24:39.233360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:44.826 [2024-07-15 10:24:39.233356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:11:45.392 10:24:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:45.392 10:24:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0
00:11:45.392 10:24:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:45.392 10:24:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable
00:11:45.392 10:24:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:11:45.392 10:24:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:45.392 10:24:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:11:45.650 [2024-07-15 10:24:40.271647] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:45.650 10:24:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:46.215 10:24:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:11:46.215 10:24:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:46.473 10:24:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:11:46.473 10:24:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:11:46.731 10:24:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:11:46.989 10:24:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=32201a28-24bf-44a7-bbce-ae6c8aadeaf5
00:11:46.989 10:24:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 32201a28-24bf-44a7-bbce-ae6c8aadeaf5 lvol 20
00:11:47.247 10:24:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=20dbe881-acfe-4113-be8c-c2fa9507ca21
00:11:47.247 10:24:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:11:47.505 10:24:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 20dbe881-acfe-4113-be8c-c2fa9507ca21
00:11:47.763 10:24:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
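That sequence builds the device under test bottom-up: two 64 MiB malloc bdevs, a raid0 stripe across them, an lvol store on the stripe, one 20 MiB lvol, and an NVMe-oF subsystem that exports the lvol over TCP (the listener notice follows below). Replayed as one standalone script -- the rpc.py path shortening and UUID capture are assumptions, and every run generates fresh UUIDs:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, 8192 B in-capsule data
$rpc bdev_malloc_create 64 512                                   # base bdev Malloc0: 64 MiB, 512 B blocks
$rpc bdev_malloc_create 64 512                                   # base bdev Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe both with 64 KiB strips
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvol store on the raid; prints its UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB logical volume; prints its UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"    # export the lvol as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420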
00:11:47.763 [2024-07-15 10:24:42.401822] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:48.021 10:24:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:48.279 10:24:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2269163
00:11:48.279 10:24:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:11:48.279 10:24:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:11:48.279 EAL: No free 2048 kB hugepages reported on node 1
00:11:49.210 10:24:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 20dbe881-acfe-4113-be8c-c2fa9507ca21 MY_SNAPSHOT
00:11:49.469 10:24:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9e7af268-1049-46de-bfe7-fdea42cb6336
00:11:49.469 10:24:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 20dbe881-acfe-4113-be8c-c2fa9507ca21 30
00:11:49.726 10:24:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9e7af268-1049-46de-bfe7-fdea42cb6336 MY_CLONE
00:11:49.984 10:24:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0eb1d544-03ab-4d97-bddd-901af8c8afba
00:11:49.984 10:24:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0eb1d544-03ab-4d97-bddd-901af8c8afba
00:11:50.917 10:24:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2269163
00:11:59.060 Initializing NVMe Controllers
00:11:59.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:11:59.060 Controller IO queue size 128, less than required.
00:11:59.060 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:59.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:11:59.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:11:59.060 Initialization complete. Launching workers.
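While spdk_nvme_perf (started above with -w randwrite -q 128 on cores 3-4; its summary follows below) hammers the namespace, the test mutates the lvol underneath it: snapshot the live volume, grow it from 20 to 30 MiB, clone the snapshot, and inflate the clone so it owns all of its clusters. The same four operations in isolation -- the UUIDs are the ones this run produced, a fresh run returns different ones:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
lvol=20dbe881-acfe-4113-be8c-c2fa9507ca21
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # read-only snapshot of the live volume
$rpc bdev_lvol_resize "$lvol" 30                      # grow the lvol from 20 MiB to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin clone backed by the snapshot
$rpc bdev_lvol_inflate "$clone"                       # copy shared clusters so the clone stands alone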
00:11:59.060 ======================================================== 00:11:59.060 Latency(us) 00:11:59.060 Device Information : IOPS MiB/s Average min max 00:11:59.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10740.25 41.95 11921.09 1263.50 76794.98 00:11:59.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10667.65 41.67 12003.98 2057.95 72737.37 00:11:59.060 ======================================================== 00:11:59.060 Total : 21407.90 83.62 11962.39 1263.50 76794.98 00:11:59.060 00:11:59.060 10:24:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:59.061 10:24:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 20dbe881-acfe-4113-be8c-c2fa9507ca21 00:11:59.061 10:24:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 32201a28-24bf-44a7-bbce-ae6c8aadeaf5 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:59.341 rmmod nvme_tcp 00:11:59.341 rmmod nvme_fabrics 00:11:59.341 rmmod nvme_keyring 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2268649 ']' 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2268649 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2268649 ']' 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2268649 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2268649 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2268649' 00:11:59.341 killing process with pid 2268649 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2268649 00:11:59.341 10:24:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2268649 00:11:59.599 10:24:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:59.599 
10:24:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:59.599 10:24:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:59.599 10:24:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:59.599 10:24:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:59.599 10:24:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.599 10:24:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:59.599 10:24:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.134 10:24:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:02.134 00:12:02.134 real 0m19.451s 00:12:02.134 user 1m6.362s 00:12:02.134 sys 0m5.574s 00:12:02.134 10:24:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:02.134 10:24:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:02.134 ************************************ 00:12:02.134 END TEST nvmf_lvol 00:12:02.134 ************************************ 00:12:02.135 10:24:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:02.135 10:24:56 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:02.135 10:24:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:02.135 10:24:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:02.135 10:24:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:02.135 ************************************ 00:12:02.135 START TEST nvmf_lvs_grow 00:12:02.135 ************************************ 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:02.135 * Looking for test storage... 
00:12:02.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:02.135 10:24:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:04.038 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:04.038 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:04.038 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:04.038 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.038 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:04.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:04.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:12:04.039 00:12:04.039 --- 10.0.0.2 ping statistics --- 00:12:04.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.039 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:04.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:04.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:12:04.039 00:12:04.039 --- 10.0.0.1 ping statistics --- 00:12:04.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.039 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2272424 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2272424 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2272424 ']' 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:04.039 10:24:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:04.039 [2024-07-15 10:24:58.580014] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:04.039 [2024-07-15 10:24:58.580095] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.039 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.039 [2024-07-15 10:24:58.652413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.298 [2024-07-15 10:24:58.767633] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.298 [2024-07-15 10:24:58.767694] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
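The ping exchange a few lines up is the standard probe of this suite's loopback topology: both E810 ports sit on one adapter (0000:0a:00.0 -> cvl_0_0, 0000:0a:00.1 -> cvl_0_1), the first is moved into a network namespace as the target side, and everything target-side, including nvmf_tgt itself, runs under ip netns exec. Condensed from the commands shown in the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1       # target lives in the namespace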
00:12:04.298 [2024-07-15 10:24:58.767711] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.298 [2024-07-15 10:24:58.767724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.298 [2024-07-15 10:24:58.767735] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.298 [2024-07-15 10:24:58.767765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:05.232 [2024-07-15 10:24:59.815533] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:05.232 ************************************ 00:12:05.232 START TEST lvs_grow_clean 00:12:05.232 ************************************ 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:05.232 10:24:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:05.798 10:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:12:05.798 10:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:06.057 10:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5c65f93b-df84-424b-85bc-89c9d395deb7 00:12:06.057 10:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c65f93b-df84-424b-85bc-89c9d395deb7 00:12:06.057 10:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:06.315 10:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:06.315 10:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:06.315 10:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5c65f93b-df84-424b-85bc-89c9d395deb7 lvol 150 00:12:06.573 10:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=76ea35d0-ed71-4c09-a89b-d7d29b89f0e7 00:12:06.573 10:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:06.573 10:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:06.830 [2024-07-15 10:25:01.236286] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:06.830 [2024-07-15 10:25:01.236371] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:06.830 true 00:12:06.830 10:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c65f93b-df84-424b-85bc-89c9d395deb7 00:12:06.830 10:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:07.088 10:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:07.088 10:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:07.346 10:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 76ea35d0-ed71-4c09-a89b-d7d29b89f0e7 00:12:07.604 10:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:07.604 [2024-07-15 10:25:02.239391] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.861 10:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:08.119 10:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2272917 00:12:08.119 10:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:08.120 10:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:08.120 10:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2272917 /var/tmp/bdevperf.sock 00:12:08.120 10:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2272917 ']' 00:12:08.120 10:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:08.120 10:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:08.120 10:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:08.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:08.120 10:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:08.120 10:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:08.120 [2024-07-15 10:25:02.593299] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:08.120 [2024-07-15 10:25:02.593374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272917 ] 00:12:08.120 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.120 [2024-07-15 10:25:02.656056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.378 [2024-07-15 10:25:02.772996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.942 10:25:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.942 10:25:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:12:08.942 10:25:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:09.507 Nvme0n1 00:12:09.507 10:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:09.766 [ 00:12:09.766 { 00:12:09.766 "name": "Nvme0n1", 00:12:09.766 "aliases": [ 00:12:09.766 "76ea35d0-ed71-4c09-a89b-d7d29b89f0e7" 00:12:09.766 ], 00:12:09.766 "product_name": "NVMe disk", 00:12:09.766 "block_size": 4096, 00:12:09.766 "num_blocks": 38912, 00:12:09.766 "uuid": "76ea35d0-ed71-4c09-a89b-d7d29b89f0e7", 00:12:09.766 "assigned_rate_limits": { 00:12:09.766 "rw_ios_per_sec": 0, 00:12:09.766 "rw_mbytes_per_sec": 0, 00:12:09.766 "r_mbytes_per_sec": 0, 00:12:09.766 "w_mbytes_per_sec": 0 00:12:09.766 }, 00:12:09.766 "claimed": false, 00:12:09.766 "zoned": false, 00:12:09.766 "supported_io_types": { 00:12:09.766 "read": true, 00:12:09.766 "write": true, 00:12:09.766 "unmap": true, 00:12:09.766 "flush": true, 00:12:09.766 "reset": true, 00:12:09.766 "nvme_admin": true, 00:12:09.766 "nvme_io": true, 00:12:09.766 "nvme_io_md": false, 00:12:09.766 "write_zeroes": true, 00:12:09.766 "zcopy": false, 00:12:09.766 "get_zone_info": false, 00:12:09.766 "zone_management": false, 00:12:09.766 "zone_append": false, 00:12:09.766 "compare": true, 00:12:09.766 "compare_and_write": true, 00:12:09.766 "abort": true, 00:12:09.766 "seek_hole": false, 00:12:09.766 "seek_data": false, 00:12:09.766 "copy": true, 00:12:09.766 "nvme_iov_md": false 00:12:09.766 }, 00:12:09.766 "memory_domains": [ 00:12:09.766 { 00:12:09.766 "dma_device_id": "system", 00:12:09.766 "dma_device_type": 1 00:12:09.766 } 00:12:09.766 ], 00:12:09.766 "driver_specific": { 00:12:09.766 "nvme": [ 00:12:09.766 { 00:12:09.766 "trid": { 00:12:09.766 "trtype": "TCP", 00:12:09.766 "adrfam": "IPv4", 00:12:09.766 "traddr": "10.0.0.2", 00:12:09.766 "trsvcid": "4420", 00:12:09.766 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:09.766 }, 00:12:09.766 "ctrlr_data": { 00:12:09.766 "cntlid": 1, 00:12:09.766 "vendor_id": "0x8086", 00:12:09.766 "model_number": "SPDK bdev Controller", 00:12:09.766 "serial_number": "SPDK0", 00:12:09.766 "firmware_revision": "24.09", 00:12:09.766 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:09.766 "oacs": { 00:12:09.766 "security": 0, 00:12:09.766 "format": 0, 00:12:09.766 "firmware": 0, 00:12:09.766 "ns_manage": 0 00:12:09.766 }, 00:12:09.766 "multi_ctrlr": true, 00:12:09.766 "ana_reporting": false 00:12:09.766 }, 
00:12:09.766 "vs": { 00:12:09.766 "nvme_version": "1.3" 00:12:09.766 }, 00:12:09.766 "ns_data": { 00:12:09.766 "id": 1, 00:12:09.766 "can_share": true 00:12:09.766 } 00:12:09.766 } 00:12:09.766 ], 00:12:09.766 "mp_policy": "active_passive" 00:12:09.766 } 00:12:09.766 } 00:12:09.766 ] 00:12:09.766 10:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2273174 00:12:09.766 10:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:09.766 10:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:10.023 Running I/O for 10 seconds... 00:12:10.957 Latency(us) 00:12:10.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:10.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:10.957 Nvme0n1 : 1.00 14430.00 56.37 0.00 0.00 0.00 0.00 0.00 00:12:10.957 =================================================================================================================== 00:12:10.957 Total : 14430.00 56.37 0.00 0.00 0.00 0.00 0.00 00:12:10.957 00:12:11.890 10:25:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5c65f93b-df84-424b-85bc-89c9d395deb7 00:12:11.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:11.890 Nvme0n1 : 2.00 14652.50 57.24 0.00 0.00 0.00 0.00 0.00 00:12:11.890 =================================================================================================================== 00:12:11.890 Total : 14652.50 57.24 0.00 0.00 0.00 0.00 0.00 00:12:11.890 00:12:12.146 true 00:12:12.146 10:25:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c65f93b-df84-424b-85bc-89c9d395deb7 00:12:12.146 10:25:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:12.404 10:25:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:12.404 10:25:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:12.404 10:25:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2273174 00:12:12.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:12.970 Nvme0n1 : 3.00 14852.33 58.02 0.00 0.00 0.00 0.00 0.00 00:12:12.970 =================================================================================================================== 00:12:12.970 Total : 14852.33 58.02 0.00 0.00 0.00 0.00 0.00 00:12:12.970 00:12:13.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:13.904 Nvme0n1 : 4.00 14919.00 58.28 0.00 0.00 0.00 0.00 0.00 00:12:13.904 =================================================================================================================== 00:12:13.904 Total : 14919.00 58.28 0.00 0.00 0.00 0.00 0.00 00:12:13.904 00:12:14.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.868 Nvme0n1 : 5.00 14947.20 58.39 0.00 0.00 0.00 0.00 0.00 00:12:14.868 =================================================================================================================== 00:12:14.868 
Total : 14947.20 58.39 0.00 0.00 0.00 0.00 0.00 00:12:14.868 00:12:16.256 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.256 Nvme0n1 : 6.00 15017.83 58.66 0.00 0.00 0.00 0.00 0.00 00:12:16.256 =================================================================================================================== 00:12:16.256 Total : 15017.83 58.66 0.00 0.00 0.00 0.00 0.00 00:12:16.256 00:12:16.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.818 Nvme0n1 : 7.00 14853.14 58.02 0.00 0.00 0.00 0.00 0.00 00:12:16.818 =================================================================================================================== 00:12:16.818 Total : 14853.14 58.02 0.00 0.00 0.00 0.00 0.00 00:12:16.818 00:12:18.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.191 Nvme0n1 : 8.00 14716.50 57.49 0.00 0.00 0.00 0.00 0.00 00:12:18.191 =================================================================================================================== 00:12:18.191 Total : 14716.50 57.49 0.00 0.00 0.00 0.00 0.00 00:12:18.191 00:12:19.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.137 Nvme0n1 : 9.00 14621.78 57.12 0.00 0.00 0.00 0.00 0.00 00:12:19.137 =================================================================================================================== 00:12:19.137 Total : 14621.78 57.12 0.00 0.00 0.00 0.00 0.00 00:12:19.137 00:12:20.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.070 Nvme0n1 : 10.00 14546.80 56.82 0.00 0.00 0.00 0.00 0.00 00:12:20.070 =================================================================================================================== 00:12:20.070 Total : 14546.80 56.82 0.00 0.00 0.00 0.00 0.00 00:12:20.070 00:12:20.070 00:12:20.070 Latency(us) 00:12:20.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.070 Nvme0n1 : 10.01 14542.81 56.81 0.00 0.00 8793.05 2560.76 16602.45 00:12:20.070 =================================================================================================================== 00:12:20.070 Total : 14542.81 56.81 0.00 0.00 8793.05 2560.76 16602.45 00:12:20.070 0 00:12:20.070 10:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2272917 00:12:20.070 10:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2272917 ']' 00:12:20.070 10:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2272917 00:12:20.070 10:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:12:20.070 10:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:20.070 10:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2272917 00:12:20.070 10:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:20.070 10:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:20.070 10:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2272917' 00:12:20.070 killing process with pid 2272917 00:12:20.070 10:25:14 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2272917 00:12:20.070 Received shutdown signal, test time was about 10.000000 seconds 00:12:20.070 00:12:20.070 Latency(us) 00:12:20.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.070 =================================================================================================================== 00:12:20.070 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:20.070 10:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2272917 00:12:20.328 10:25:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:20.586 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:20.844 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c65f93b-df84-424b-85bc-89c9d395deb7 00:12:20.844 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:21.102 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:21.102 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:21.103 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:21.361 [2024-07-15 10:25:15.886675] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:21.361 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c65f93b-df84-424b-85bc-89c9d395deb7 00:12:21.361 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:12:21.361 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c65f93b-df84-424b-85bc-89c9d395deb7 00:12:21.361 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:21.361 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.361 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:21.361 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.361 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:21.361 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:21.361 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:21.361 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:21.361 10:25:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c65f93b-df84-424b-85bc-89c9d395deb7 00:12:21.619 request: 00:12:21.619 { 00:12:21.619 "uuid": "5c65f93b-df84-424b-85bc-89c9d395deb7", 00:12:21.619 "method": "bdev_lvol_get_lvstores", 00:12:21.619 "req_id": 1 00:12:21.619 } 00:12:21.619 Got JSON-RPC error response 00:12:21.619 response: 00:12:21.619 { 00:12:21.619 "code": -19, 00:12:21.619 "message": "No such device" 00:12:21.619 } 00:12:21.619 10:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:12:21.619 10:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:21.619 10:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:21.619 10:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:21.619 10:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:21.877 aio_bdev 00:12:21.877 10:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 76ea35d0-ed71-4c09-a89b-d7d29b89f0e7 00:12:21.877 10:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=76ea35d0-ed71-4c09-a89b-d7d29b89f0e7 00:12:21.877 10:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:21.877 10:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:12:21.877 10:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:21.877 10:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:21.877 10:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:22.135 10:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 76ea35d0-ed71-4c09-a89b-d7d29b89f0e7 -t 2000 00:12:22.394 [ 00:12:22.394 { 00:12:22.394 "name": "76ea35d0-ed71-4c09-a89b-d7d29b89f0e7", 00:12:22.394 "aliases": [ 00:12:22.394 "lvs/lvol" 00:12:22.394 ], 00:12:22.394 "product_name": "Logical Volume", 00:12:22.394 "block_size": 4096, 00:12:22.394 "num_blocks": 38912, 00:12:22.394 "uuid": "76ea35d0-ed71-4c09-a89b-d7d29b89f0e7", 00:12:22.394 "assigned_rate_limits": { 00:12:22.394 "rw_ios_per_sec": 0, 00:12:22.394 "rw_mbytes_per_sec": 0, 00:12:22.394 "r_mbytes_per_sec": 0, 00:12:22.395 "w_mbytes_per_sec": 0 00:12:22.395 }, 00:12:22.395 "claimed": false, 00:12:22.395 "zoned": false, 00:12:22.395 "supported_io_types": { 00:12:22.395 "read": true, 00:12:22.395 "write": true, 00:12:22.395 "unmap": true, 00:12:22.395 "flush": false, 00:12:22.395 "reset": true, 00:12:22.395 "nvme_admin": false, 00:12:22.395 "nvme_io": false, 00:12:22.395 
"nvme_io_md": false, 00:12:22.395 "write_zeroes": true, 00:12:22.395 "zcopy": false, 00:12:22.395 "get_zone_info": false, 00:12:22.395 "zone_management": false, 00:12:22.395 "zone_append": false, 00:12:22.395 "compare": false, 00:12:22.395 "compare_and_write": false, 00:12:22.395 "abort": false, 00:12:22.395 "seek_hole": true, 00:12:22.395 "seek_data": true, 00:12:22.395 "copy": false, 00:12:22.395 "nvme_iov_md": false 00:12:22.395 }, 00:12:22.395 "driver_specific": { 00:12:22.395 "lvol": { 00:12:22.395 "lvol_store_uuid": "5c65f93b-df84-424b-85bc-89c9d395deb7", 00:12:22.395 "base_bdev": "aio_bdev", 00:12:22.395 "thin_provision": false, 00:12:22.395 "num_allocated_clusters": 38, 00:12:22.395 "snapshot": false, 00:12:22.395 "clone": false, 00:12:22.395 "esnap_clone": false 00:12:22.395 } 00:12:22.395 } 00:12:22.395 } 00:12:22.395 ] 00:12:22.395 10:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:12:22.395 10:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c65f93b-df84-424b-85bc-89c9d395deb7 00:12:22.395 10:25:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:22.653 10:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:22.653 10:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c65f93b-df84-424b-85bc-89c9d395deb7 00:12:22.653 10:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:22.911 10:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:22.911 10:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 76ea35d0-ed71-4c09-a89b-d7d29b89f0e7 00:12:23.169 10:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5c65f93b-df84-424b-85bc-89c9d395deb7 00:12:23.427 10:25:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:23.685 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:23.685 00:12:23.685 real 0m18.392s 00:12:23.685 user 0m17.271s 00:12:23.685 sys 0m2.291s 00:12:23.685 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:23.685 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:23.685 ************************************ 00:12:23.685 END TEST lvs_grow_clean 00:12:23.685 ************************************ 00:12:23.685 10:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:23.685 10:25:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:23.685 10:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:23.685 10:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:12:23.685 10:25:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:23.685 ************************************ 00:12:23.685 START TEST lvs_grow_dirty 00:12:23.685 ************************************ 00:12:23.685 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:12:23.685 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:23.685 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:23.685 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:23.686 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:23.686 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:23.686 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:23.686 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:23.686 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:23.686 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:24.251 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:24.252 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:24.252 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ac2f06c8-3bfd-4bf7-89a5-51b85d654d78 00:12:24.252 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac2f06c8-3bfd-4bf7-89a5-51b85d654d78 00:12:24.252 10:25:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:24.509 10:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:24.509 10:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:24.509 10:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ac2f06c8-3bfd-4bf7-89a5-51b85d654d78 lvol 150 00:12:24.767 10:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=94ac96a6-bd8f-4164-a7ac-1b938aba6a85 00:12:24.767 10:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:24.767 10:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:25.025 
[2024-07-15 10:25:19.605104] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:25.025 [2024-07-15 10:25:19.605200] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:25.025 true 00:12:25.025 10:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac2f06c8-3bfd-4bf7-89a5-51b85d654d78 00:12:25.025 10:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:25.283 10:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:25.283 10:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:25.541 10:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 94ac96a6-bd8f-4164-a7ac-1b938aba6a85 00:12:25.799 10:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:26.057 [2024-07-15 10:25:20.604192] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.057 10:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:26.315 10:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2275111 00:12:26.315 10:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:26.315 10:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:26.315 10:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2275111 /var/tmp/bdevperf.sock 00:12:26.315 10:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2275111 ']' 00:12:26.315 10:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:26.315 10:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.315 10:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:26.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
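Condensed, the grow-by-rescan sequence the dirty pass just replayed is the following RPC flow (backing-file path shortened for readability; the UUID capture is illustrative):

# 200M backing file -> AIO bdev -> lvstore with 4M clusters; the
# --md-pages-per-cluster-ratio headroom lets the lvstore grow later.
truncate -s 200M "$testdir/aio_bdev"
rpc.py bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096
lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # 49 data clusters
rpc.py bdev_lvol_create -u "$lvs" lvol 150               # 150M lvol
# Double the file, then have the AIO bdev pick up the new size; the
# resize event above (51200 -> 102400 blocks) is the result.
truncate -s 400M "$testdir/aio_bdev"
rpc.py bdev_aio_rescan aio_bdev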
00:12:26.315 10:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.315 10:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:26.315 [2024-07-15 10:25:20.913867] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:26.315 [2024-07-15 10:25:20.913949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2275111 ] 00:12:26.315 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.573 [2024-07-15 10:25:20.975293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.573 [2024-07-15 10:25:21.093891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.573 10:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.573 10:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:26.573 10:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:27.138 Nvme0n1 00:12:27.138 10:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:27.395 [ 00:12:27.395 { 00:12:27.395 "name": "Nvme0n1", 00:12:27.395 "aliases": [ 00:12:27.395 "94ac96a6-bd8f-4164-a7ac-1b938aba6a85" 00:12:27.395 ], 00:12:27.395 "product_name": "NVMe disk", 00:12:27.395 "block_size": 4096, 00:12:27.395 "num_blocks": 38912, 00:12:27.395 "uuid": "94ac96a6-bd8f-4164-a7ac-1b938aba6a85", 00:12:27.395 "assigned_rate_limits": { 00:12:27.395 "rw_ios_per_sec": 0, 00:12:27.395 "rw_mbytes_per_sec": 0, 00:12:27.395 "r_mbytes_per_sec": 0, 00:12:27.395 "w_mbytes_per_sec": 0 00:12:27.395 }, 00:12:27.395 "claimed": false, 00:12:27.395 "zoned": false, 00:12:27.395 "supported_io_types": { 00:12:27.395 "read": true, 00:12:27.395 "write": true, 00:12:27.395 "unmap": true, 00:12:27.395 "flush": true, 00:12:27.395 "reset": true, 00:12:27.395 "nvme_admin": true, 00:12:27.395 "nvme_io": true, 00:12:27.395 "nvme_io_md": false, 00:12:27.395 "write_zeroes": true, 00:12:27.395 "zcopy": false, 00:12:27.396 "get_zone_info": false, 00:12:27.396 "zone_management": false, 00:12:27.396 "zone_append": false, 00:12:27.396 "compare": true, 00:12:27.396 "compare_and_write": true, 00:12:27.396 "abort": true, 00:12:27.396 "seek_hole": false, 00:12:27.396 "seek_data": false, 00:12:27.396 "copy": true, 00:12:27.396 "nvme_iov_md": false 00:12:27.396 }, 00:12:27.396 "memory_domains": [ 00:12:27.396 { 00:12:27.396 "dma_device_id": "system", 00:12:27.396 "dma_device_type": 1 00:12:27.396 } 00:12:27.396 ], 00:12:27.396 "driver_specific": { 00:12:27.396 "nvme": [ 00:12:27.396 { 00:12:27.396 "trid": { 00:12:27.396 "trtype": "TCP", 00:12:27.396 "adrfam": "IPv4", 00:12:27.396 "traddr": "10.0.0.2", 00:12:27.396 "trsvcid": "4420", 00:12:27.396 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:27.396 }, 00:12:27.396 "ctrlr_data": { 00:12:27.396 "cntlid": 1, 00:12:27.396 "vendor_id": "0x8086", 00:12:27.396 "model_number": "SPDK bdev Controller", 00:12:27.396 "serial_number": "SPDK0", 
00:12:27.396 "firmware_revision": "24.09", 00:12:27.396 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:27.396 "oacs": { 00:12:27.396 "security": 0, 00:12:27.396 "format": 0, 00:12:27.396 "firmware": 0, 00:12:27.396 "ns_manage": 0 00:12:27.396 }, 00:12:27.396 "multi_ctrlr": true, 00:12:27.396 "ana_reporting": false 00:12:27.396 }, 00:12:27.396 "vs": { 00:12:27.396 "nvme_version": "1.3" 00:12:27.396 }, 00:12:27.396 "ns_data": { 00:12:27.396 "id": 1, 00:12:27.396 "can_share": true 00:12:27.396 } 00:12:27.396 } 00:12:27.396 ], 00:12:27.396 "mp_policy": "active_passive" 00:12:27.396 } 00:12:27.396 } 00:12:27.396 ] 00:12:27.396 10:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2275247 00:12:27.396 10:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:27.396 10:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:27.396 Running I/O for 10 seconds... 00:12:28.769 Latency(us) 00:12:28.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:28.770 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:28.770 Nvme0n1 : 1.00 14481.00 56.57 0.00 0.00 0.00 0.00 0.00 00:12:28.770 =================================================================================================================== 00:12:28.770 Total : 14481.00 56.57 0.00 0.00 0.00 0.00 0.00 00:12:28.770 00:12:29.336 10:25:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ac2f06c8-3bfd-4bf7-89a5-51b85d654d78 00:12:29.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:29.593 Nvme0n1 : 2.00 14652.50 57.24 0.00 0.00 0.00 0.00 0.00 00:12:29.593 =================================================================================================================== 00:12:29.593 Total : 14652.50 57.24 0.00 0.00 0.00 0.00 0.00 00:12:29.593 00:12:29.593 true 00:12:29.593 10:25:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac2f06c8-3bfd-4bf7-89a5-51b85d654d78 00:12:29.593 10:25:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:29.853 10:25:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:29.853 10:25:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:29.853 10:25:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2275247 00:12:30.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:30.422 Nvme0n1 : 3.00 14813.67 57.87 0.00 0.00 0.00 0.00 0.00 00:12:30.422 =================================================================================================================== 00:12:30.422 Total : 14813.67 57.87 0.00 0.00 0.00 0.00 0.00 00:12:30.422 00:12:31.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.801 Nvme0n1 : 4.00 14890.75 58.17 0.00 0.00 0.00 0.00 0.00 00:12:31.801 =================================================================================================================== 00:12:31.801 Total : 14890.75 58.17 0.00 
0.00 0.00 0.00 0.00 00:12:31.801 00:12:32.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:32.771 Nvme0n1 : 5.00 14925.20 58.30 0.00 0.00 0.00 0.00 0.00 00:12:32.772 =================================================================================================================== 00:12:32.772 Total : 14925.20 58.30 0.00 0.00 0.00 0.00 0.00 00:12:32.772 00:12:33.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:33.705 Nvme0n1 : 6.00 15004.17 58.61 0.00 0.00 0.00 0.00 0.00 00:12:33.705 =================================================================================================================== 00:12:33.705 Total : 15004.17 58.61 0.00 0.00 0.00 0.00 0.00 00:12:33.705 00:12:34.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:34.644 Nvme0n1 : 7.00 15048.00 58.78 0.00 0.00 0.00 0.00 0.00 00:12:34.644 =================================================================================================================== 00:12:34.644 Total : 15048.00 58.78 0.00 0.00 0.00 0.00 0.00 00:12:34.644 00:12:35.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:35.581 Nvme0n1 : 8.00 15067.50 58.86 0.00 0.00 0.00 0.00 0.00 00:12:35.581 =================================================================================================================== 00:12:35.581 Total : 15067.50 58.86 0.00 0.00 0.00 0.00 0.00 00:12:35.581 00:12:36.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:36.519 Nvme0n1 : 9.00 15110.11 59.02 0.00 0.00 0.00 0.00 0.00 00:12:36.519 =================================================================================================================== 00:12:36.519 Total : 15110.11 59.02 0.00 0.00 0.00 0.00 0.00 00:12:36.519 00:12:37.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:37.455 Nvme0n1 : 10.00 15136.40 59.13 0.00 0.00 0.00 0.00 0.00 00:12:37.455 =================================================================================================================== 00:12:37.455 Total : 15136.40 59.13 0.00 0.00 0.00 0.00 0.00 00:12:37.455 00:12:37.455 00:12:37.455 Latency(us) 00:12:37.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:37.455 Nvme0n1 : 10.01 15141.75 59.15 0.00 0.00 8447.52 3665.16 16019.91 00:12:37.455 =================================================================================================================== 00:12:37.455 Total : 15141.75 59.15 0.00 0.00 8447.52 3665.16 16019.91 00:12:37.455 0 00:12:37.455 10:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2275111 00:12:37.455 10:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2275111 ']' 00:12:37.455 10:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2275111 00:12:37.455 10:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:12:37.455 10:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:37.455 10:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2275111 00:12:37.455 10:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:37.455 10:25:32 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:37.455 10:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2275111' 00:12:37.455 killing process with pid 2275111 00:12:37.455 10:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2275111 00:12:37.455 Received shutdown signal, test time was about 10.000000 seconds 00:12:37.455 00:12:37.455 Latency(us) 00:12:37.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.455 =================================================================================================================== 00:12:37.455 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:37.455 10:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2275111 00:12:38.023 10:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:38.023 10:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:38.281 10:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac2f06c8-3bfd-4bf7-89a5-51b85d654d78 00:12:38.281 10:25:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:38.540 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:38.540 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:38.540 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2272424 00:12:38.540 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2272424 00:12:38.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2272424 Killed "${NVMF_APP[@]}" "$@" 00:12:38.540 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:38.540 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:38.540 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:38.540 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:38.540 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:38.800 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2276574 00:12:38.801 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:38.801 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2276574 00:12:38.801 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2276574 ']' 00:12:38.801 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.801 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.801 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.801 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.801 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:38.801 [2024-07-15 10:25:33.242532] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:38.801 [2024-07-15 10:25:33.242615] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.801 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.801 [2024-07-15 10:25:33.308397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.801 [2024-07-15 10:25:33.414900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.801 [2024-07-15 10:25:33.414955] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.801 [2024-07-15 10:25:33.414968] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.801 [2024-07-15 10:25:33.414979] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.801 [2024-07-15 10:25:33.414989] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
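The two app_setup_trace notices describe how to pull the tracepoint data this 0xFFFF-masked target records; sketched out, with instance id 0 as in this run:

# Snapshot the live trace ring of nvmf instance 0...
spdk_trace -s nvmf -i 0
# ...or keep the shared-memory file for offline decoding later.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # destination path illustrative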
00:12:38.801 [2024-07-15 10:25:33.415022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.057 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.057 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:39.057 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.057 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:39.057 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:39.057 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.057 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:39.317 [2024-07-15 10:25:33.819313] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:39.317 [2024-07-15 10:25:33.819453] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:39.317 [2024-07-15 10:25:33.819511] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:39.317 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:39.317 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 94ac96a6-bd8f-4164-a7ac-1b938aba6a85 00:12:39.317 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=94ac96a6-bd8f-4164-a7ac-1b938aba6a85 00:12:39.317 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:39.317 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:39.317 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:39.317 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:39.317 10:25:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:39.575 10:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 94ac96a6-bd8f-4164-a7ac-1b938aba6a85 -t 2000 00:12:39.834 [ 00:12:39.834 { 00:12:39.834 "name": "94ac96a6-bd8f-4164-a7ac-1b938aba6a85", 00:12:39.834 "aliases": [ 00:12:39.834 "lvs/lvol" 00:12:39.834 ], 00:12:39.835 "product_name": "Logical Volume", 00:12:39.835 "block_size": 4096, 00:12:39.835 "num_blocks": 38912, 00:12:39.835 "uuid": "94ac96a6-bd8f-4164-a7ac-1b938aba6a85", 00:12:39.835 "assigned_rate_limits": { 00:12:39.835 "rw_ios_per_sec": 0, 00:12:39.835 "rw_mbytes_per_sec": 0, 00:12:39.835 "r_mbytes_per_sec": 0, 00:12:39.835 "w_mbytes_per_sec": 0 00:12:39.835 }, 00:12:39.835 "claimed": false, 00:12:39.835 "zoned": false, 00:12:39.835 "supported_io_types": { 00:12:39.835 "read": true, 00:12:39.835 "write": true, 00:12:39.835 "unmap": true, 00:12:39.835 "flush": false, 00:12:39.835 "reset": true, 00:12:39.835 "nvme_admin": false, 00:12:39.835 "nvme_io": false, 00:12:39.835 "nvme_io_md": 
false, 00:12:39.835 "write_zeroes": true, 00:12:39.835 "zcopy": false, 00:12:39.835 "get_zone_info": false, 00:12:39.835 "zone_management": false, 00:12:39.835 "zone_append": false, 00:12:39.835 "compare": false, 00:12:39.835 "compare_and_write": false, 00:12:39.835 "abort": false, 00:12:39.835 "seek_hole": true, 00:12:39.835 "seek_data": true, 00:12:39.835 "copy": false, 00:12:39.835 "nvme_iov_md": false 00:12:39.835 }, 00:12:39.835 "driver_specific": { 00:12:39.835 "lvol": { 00:12:39.835 "lvol_store_uuid": "ac2f06c8-3bfd-4bf7-89a5-51b85d654d78", 00:12:39.835 "base_bdev": "aio_bdev", 00:12:39.835 "thin_provision": false, 00:12:39.835 "num_allocated_clusters": 38, 00:12:39.835 "snapshot": false, 00:12:39.835 "clone": false, 00:12:39.835 "esnap_clone": false 00:12:39.835 } 00:12:39.835 } 00:12:39.835 } 00:12:39.835 ] 00:12:39.835 10:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:39.835 10:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac2f06c8-3bfd-4bf7-89a5-51b85d654d78 00:12:39.835 10:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:40.093 10:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:40.093 10:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac2f06c8-3bfd-4bf7-89a5-51b85d654d78 00:12:40.093 10:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:40.350 10:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:40.350 10:25:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:40.608 [2024-07-15 10:25:35.116093] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:40.608 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac2f06c8-3bfd-4bf7-89a5-51b85d654d78 00:12:40.608 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:40.608 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac2f06c8-3bfd-4bf7-89a5-51b85d654d78 00:12:40.609 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:40.609 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.609 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:40.609 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.609 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:12:40.609 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.609 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:40.609 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:40.609 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac2f06c8-3bfd-4bf7-89a5-51b85d654d78 00:12:40.866 request: 00:12:40.866 { 00:12:40.866 "uuid": "ac2f06c8-3bfd-4bf7-89a5-51b85d654d78", 00:12:40.866 "method": "bdev_lvol_get_lvstores", 00:12:40.866 "req_id": 1 00:12:40.866 } 00:12:40.866 Got JSON-RPC error response 00:12:40.866 response: 00:12:40.866 { 00:12:40.866 "code": -19, 00:12:40.866 "message": "No such device" 00:12:40.866 } 00:12:40.866 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:40.866 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:40.866 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:40.866 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:40.866 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:41.123 aio_bdev 00:12:41.123 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 94ac96a6-bd8f-4164-a7ac-1b938aba6a85 00:12:41.123 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=94ac96a6-bd8f-4164-a7ac-1b938aba6a85 00:12:41.123 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:41.123 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:41.123 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:41.123 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:41.123 10:25:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:41.688 10:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 94ac96a6-bd8f-4164-a7ac-1b938aba6a85 -t 2000 00:12:41.688 [ 00:12:41.688 { 00:12:41.688 "name": "94ac96a6-bd8f-4164-a7ac-1b938aba6a85", 00:12:41.688 "aliases": [ 00:12:41.688 "lvs/lvol" 00:12:41.688 ], 00:12:41.688 "product_name": "Logical Volume", 00:12:41.688 "block_size": 4096, 00:12:41.688 "num_blocks": 38912, 00:12:41.688 "uuid": "94ac96a6-bd8f-4164-a7ac-1b938aba6a85", 00:12:41.688 "assigned_rate_limits": { 00:12:41.688 "rw_ios_per_sec": 0, 00:12:41.688 "rw_mbytes_per_sec": 0, 00:12:41.688 "r_mbytes_per_sec": 0, 00:12:41.688 "w_mbytes_per_sec": 0 00:12:41.688 }, 00:12:41.688 "claimed": false, 00:12:41.688 "zoned": false, 00:12:41.688 "supported_io_types": { 
00:12:41.688 "read": true, 00:12:41.688 "write": true, 00:12:41.688 "unmap": true, 00:12:41.688 "flush": false, 00:12:41.688 "reset": true, 00:12:41.688 "nvme_admin": false, 00:12:41.688 "nvme_io": false, 00:12:41.688 "nvme_io_md": false, 00:12:41.688 "write_zeroes": true, 00:12:41.688 "zcopy": false, 00:12:41.688 "get_zone_info": false, 00:12:41.688 "zone_management": false, 00:12:41.688 "zone_append": false, 00:12:41.688 "compare": false, 00:12:41.688 "compare_and_write": false, 00:12:41.688 "abort": false, 00:12:41.688 "seek_hole": true, 00:12:41.688 "seek_data": true, 00:12:41.688 "copy": false, 00:12:41.688 "nvme_iov_md": false 00:12:41.688 }, 00:12:41.688 "driver_specific": { 00:12:41.688 "lvol": { 00:12:41.688 "lvol_store_uuid": "ac2f06c8-3bfd-4bf7-89a5-51b85d654d78", 00:12:41.688 "base_bdev": "aio_bdev", 00:12:41.688 "thin_provision": false, 00:12:41.688 "num_allocated_clusters": 38, 00:12:41.688 "snapshot": false, 00:12:41.688 "clone": false, 00:12:41.688 "esnap_clone": false 00:12:41.688 } 00:12:41.688 } 00:12:41.688 } 00:12:41.688 ] 00:12:41.688 10:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:41.688 10:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac2f06c8-3bfd-4bf7-89a5-51b85d654d78 00:12:41.688 10:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:41.947 10:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:41.947 10:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac2f06c8-3bfd-4bf7-89a5-51b85d654d78 00:12:41.947 10:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:42.213 10:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:42.213 10:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 94ac96a6-bd8f-4164-a7ac-1b938aba6a85 00:12:42.526 10:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ac2f06c8-3bfd-4bf7-89a5-51b85d654d78 00:12:42.784 10:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:43.043 10:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:43.043 00:12:43.043 real 0m19.273s 00:12:43.043 user 0m49.941s 00:12:43.043 sys 0m4.905s 00:12:43.043 10:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:43.044 ************************************ 00:12:43.044 END TEST lvs_grow_dirty 00:12:43.044 ************************************ 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:43.044 nvmf_trace.0 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:43.044 10:25:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:43.044 rmmod nvme_tcp 00:12:43.044 rmmod nvme_fabrics 00:12:43.044 rmmod nvme_keyring 00:12:43.303 10:25:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:43.303 10:25:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:43.303 10:25:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:43.303 10:25:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2276574 ']' 00:12:43.303 10:25:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2276574 00:12:43.303 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2276574 ']' 00:12:43.303 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2276574 00:12:43.303 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:43.303 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:43.303 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2276574 00:12:43.304 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:43.304 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:43.304 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2276574' 00:12:43.304 killing process with pid 2276574 00:12:43.304 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2276574 00:12:43.304 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2276574 00:12:43.564 10:25:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:43.564 10:25:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:43.564 10:25:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:43.564 
10:25:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:43.564 10:25:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:43.564 10:25:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.564 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.564 10:25:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.472 10:25:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:45.472 00:12:45.472 real 0m43.722s 00:12:45.472 user 1m13.338s 00:12:45.472 sys 0m9.075s 00:12:45.472 10:25:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:45.472 10:25:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:45.472 ************************************ 00:12:45.473 END TEST nvmf_lvs_grow 00:12:45.473 ************************************ 00:12:45.473 10:25:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:45.473 10:25:40 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:45.473 10:25:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:45.473 10:25:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.473 10:25:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:45.473 ************************************ 00:12:45.473 START TEST nvmf_bdev_io_wait 00:12:45.473 ************************************ 00:12:45.473 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:45.730 * Looking for test storage... 
00:12:45.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.730 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.730 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:45.730 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.730 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.730 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.730 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.730 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.730 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.730 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.730 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.730 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.730 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.730 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:45.731 10:25:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.627 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:47.628 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:47.628 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:47.628 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:47.628 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:47.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:47.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:12:47.628 00:12:47.628 --- 10.0.0.2 ping statistics --- 00:12:47.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.628 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:47.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:12:47.628 00:12:47.628 --- 10.0.0.1 ping statistics --- 00:12:47.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.628 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:47.628 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:47.886 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:47.886 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:47.886 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:47.886 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:47.886 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2279101 00:12:47.886 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:47.886 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2279101 00:12:47.886 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2279101 ']' 00:12:47.886 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.886 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:47.886 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.886 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:47.886 10:25:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:47.886 [2024-07-15 10:25:42.333346] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
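The phy-mode network the trace above assembled puts the target port in its own namespace so initiator and target can share one host; condensed from the commands shown:

# Target port lives in a namespace, initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator side and check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp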
00:12:47.886 [2024-07-15 10:25:42.333418] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.886 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.887 [2024-07-15 10:25:42.400702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.887 [2024-07-15 10:25:42.518710] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.887 [2024-07-15 10:25:42.518773] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.887 [2024-07-15 10:25:42.518799] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.887 [2024-07-15 10:25:42.518813] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.887 [2024-07-15 10:25:42.518825] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.887 [2024-07-15 10:25:42.518949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.887 [2024-07-15 10:25:42.518977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.887 [2024-07-15 10:25:42.519045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.887 [2024-07-15 10:25:42.519047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:48.837 [2024-07-15 10:25:43.423579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
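The target is started with --wait-for-rpc precisely so that bdev_set_options can run before subsystem init: -p 5 -c 1 shrinks the bdev_io pool and per-channel cache to almost nothing, which is what later pushes I/O through the bdev_io_wait (pool-exhausted retry) paths this test exists to cover. The -m 0xF argument is a hexadecimal core mask, bits 0-3, matching the four "Reactor started on core 0..3" notices above. Driven by hand, the same bring-up would look roughly like this sketch (rpc_cmd in the trace is SPDK's wrapper around scripts/rpc.py; the default /var/tmp/spdk.sock socket is assumed):

  # nvmf_tgt is idling on --wait-for-rpc until framework_start_init arrives
  ./scripts/rpc.py bdev_set_options -p 5 -c 1          # tiny bdev_io pool/cache -> forces IO-wait retries
  ./scripts/rpc.py framework_start_init                # finish the deferred subsystem init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport with the harness's options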
00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:48.837 Malloc0 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.837 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:48.837 [2024-07-15 10:25:43.485171] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.095 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.095 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2279258 00:12:49.095 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:49.095 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:49.095 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2279260 00:12:49.095 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:49.095 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:49.095 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:49.095 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:49.095 { 00:12:49.095 "params": { 00:12:49.095 "name": "Nvme$subsystem", 00:12:49.095 "trtype": "$TEST_TRANSPORT", 00:12:49.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:49.095 "adrfam": "ipv4", 00:12:49.095 "trsvcid": "$NVMF_PORT", 00:12:49.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:49.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:49.095 "hdgst": ${hdgst:-false}, 00:12:49.095 "ddgst": ${ddgst:-false} 00:12:49.095 }, 00:12:49.095 "method": "bdev_nvme_attach_controller" 00:12:49.095 } 00:12:49.095 EOF 00:12:49.095 )") 00:12:49.095 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:49.095 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2279262 00:12:49.095 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:49.095 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:49.095 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:49.095 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:49.095 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:49.095 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:49.095 { 00:12:49.095 "params": { 00:12:49.095 "name": "Nvme$subsystem", 00:12:49.096 "trtype": "$TEST_TRANSPORT", 00:12:49.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:49.096 "adrfam": "ipv4", 00:12:49.096 "trsvcid": "$NVMF_PORT", 00:12:49.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:49.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:49.096 "hdgst": ${hdgst:-false}, 00:12:49.096 "ddgst": ${ddgst:-false} 00:12:49.096 }, 00:12:49.096 "method": "bdev_nvme_attach_controller" 00:12:49.096 } 00:12:49.096 EOF 00:12:49.096 )") 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2279265 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:49.096 { 00:12:49.096 "params": { 00:12:49.096 "name": "Nvme$subsystem", 00:12:49.096 "trtype": "$TEST_TRANSPORT", 00:12:49.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:49.096 "adrfam": "ipv4", 00:12:49.096 "trsvcid": "$NVMF_PORT", 00:12:49.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:49.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:49.096 "hdgst": ${hdgst:-false}, 00:12:49.096 "ddgst": ${ddgst:-false} 00:12:49.096 }, 00:12:49.096 "method": "bdev_nvme_attach_controller" 00:12:49.096 } 00:12:49.096 EOF 00:12:49.096 )") 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:49.096 10:25:43 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:49.096 { 00:12:49.096 "params": { 00:12:49.096 "name": "Nvme$subsystem", 00:12:49.096 "trtype": "$TEST_TRANSPORT", 00:12:49.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:49.096 "adrfam": "ipv4", 00:12:49.096 "trsvcid": "$NVMF_PORT", 00:12:49.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:49.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:49.096 "hdgst": ${hdgst:-false}, 00:12:49.096 "ddgst": ${ddgst:-false} 00:12:49.096 }, 00:12:49.096 "method": "bdev_nvme_attach_controller" 00:12:49.096 } 00:12:49.096 EOF 00:12:49.096 )") 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2279258 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:49.096 "params": { 00:12:49.096 "name": "Nvme1", 00:12:49.096 "trtype": "tcp", 00:12:49.096 "traddr": "10.0.0.2", 00:12:49.096 "adrfam": "ipv4", 00:12:49.096 "trsvcid": "4420", 00:12:49.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:49.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:49.096 "hdgst": false, 00:12:49.096 "ddgst": false 00:12:49.096 }, 00:12:49.096 "method": "bdev_nvme_attach_controller" 00:12:49.096 }' 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:49.096 "params": { 00:12:49.096 "name": "Nvme1", 00:12:49.096 "trtype": "tcp", 00:12:49.096 "traddr": "10.0.0.2", 00:12:49.096 "adrfam": "ipv4", 00:12:49.096 "trsvcid": "4420", 00:12:49.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:49.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:49.096 "hdgst": false, 00:12:49.096 "ddgst": false 00:12:49.096 }, 00:12:49.096 "method": "bdev_nvme_attach_controller" 00:12:49.096 }' 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
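Each of the four bdevperf instances receives its bdev configuration as SPDK JSON on --json /dev/fd/63, i.e. through a bash process substitution rather than a temp file. The heredoc template above resolves the ${hdgst:-false}/${ddgst:-false} defaults and the $TEST_TRANSPORT/$NVMF_FIRST_TARGET_IP placeholders into the concrete bdev_nvme_attach_controller call printed by the printf lines in the trace. As a standalone file the generated document would plausibly look like the sketch below; only the inner method/params fragment appears verbatim in this log, and the outer "subsystems" wrapper is an assumption based on bdevperf's JSON config format:

  {
    "subsystems": [
      { "subsystem": "bdev",
        "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1",
                        "hdgst": false, "ddgst": false } } ] } ]
  }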
00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:49.096 "params": { 00:12:49.096 "name": "Nvme1", 00:12:49.096 "trtype": "tcp", 00:12:49.096 "traddr": "10.0.0.2", 00:12:49.096 "adrfam": "ipv4", 00:12:49.096 "trsvcid": "4420", 00:12:49.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:49.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:49.096 "hdgst": false, 00:12:49.096 "ddgst": false 00:12:49.096 }, 00:12:49.096 "method": "bdev_nvme_attach_controller" 00:12:49.096 }' 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:49.096 10:25:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:49.096 "params": { 00:12:49.096 "name": "Nvme1", 00:12:49.096 "trtype": "tcp", 00:12:49.096 "traddr": "10.0.0.2", 00:12:49.096 "adrfam": "ipv4", 00:12:49.096 "trsvcid": "4420", 00:12:49.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:49.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:49.096 "hdgst": false, 00:12:49.096 "ddgst": false 00:12:49.096 }, 00:12:49.096 "method": "bdev_nvme_attach_controller" 00:12:49.096 }'
00:12:49.096 [2024-07-15 10:25:43.532465] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:12:49.096 [2024-07-15 10:25:43.532472] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:12:49.096 [2024-07-15 10:25:43.532465] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:12:49.096 [2024-07-15 10:25:43.532472] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:12:49.096 [2024-07-15 10:25:43.532557] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:12:49.096 [2024-07-15 10:25:43.532557] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:12:49.096 [2024-07-15 10:25:43.532558] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:12:49.096 [2024-07-15 10:25:43.532558] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:12:49.096 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.096 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.096 [2024-07-15 10:25:43.705901] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.353 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.353 [2024-07-15 10:25:43.802672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:49.353 [2024-07-15 10:25:43.806751] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.353 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.353 [2024-07-15 10:25:43.879038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.353 [2024-07-15 10:25:43.908557]
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:49.353 [2024-07-15 10:25:43.953314] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.353 [2024-07-15 10:25:43.973337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:49.618 [2024-07-15 10:25:44.044261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:49.618 Running I/O for 1 seconds... 00:12:49.618 Running I/O for 1 seconds... 00:12:49.618 Running I/O for 1 seconds... 00:12:49.877 Running I/O for 1 seconds... 00:12:50.811
00:12:50.811 Latency(us)
00:12:50.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:50.811 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:12:50.811 Nvme1n1 : 1.01 9921.82 38.76 0.00 0.00 12843.35 8107.05 21165.70
00:12:50.811 ===================================================================================================================
00:12:50.811 Total : 9921.82 38.76 0.00 0.00 12843.35 8107.05 21165.70
00:12:50.811
00:12:50.811 Latency(us)
00:12:50.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:50.812 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:12:50.812 Nvme1n1 : 1.02 5350.28 20.90 0.00 0.00 23739.63 7233.23 31457.28
00:12:50.812 ===================================================================================================================
00:12:50.812 Total : 5350.28 20.90 0.00 0.00 23739.63 7233.23 31457.28
00:12:50.812
00:12:50.812 Latency(us)
00:12:50.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:50.812 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:12:50.812 Nvme1n1 : 1.00 199798.87 780.46 0.00 0.00 637.98 263.96 788.86
00:12:50.812 ===================================================================================================================
00:12:50.812 Total : 199798.87 780.46 0.00 0.00 637.98 263.96 788.86
00:12:50.812
00:12:50.812 Latency(us)
00:12:50.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:50.812 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:12:50.812 Nvme1n1 : 1.01 4940.25 19.30 0.00 0.00 25779.53 9223.59 53982.25
00:12:50.812 ===================================================================================================================
00:12:50.812 Total : 4940.25 19.30 0.00 0.00 25779.53 9223.59 53982.25
00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2279260 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2279262 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2279265 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:51.071 10:25:45
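The four result tables are internally consistent: MiB/s is just IOPS times the 4096-byte I/O size, and the flush job's ~200k IOPS dwarfs the data-moving workloads because flushing a malloc bdev completes without touching any media. A one-line check against the read row above:

  awk 'BEGIN { printf "%.2f MiB/s\n", 9921.82 * 4096 / 1048576 }'   # matches the printed 38.76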
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:51.071 rmmod nvme_tcp 00:12:51.071 rmmod nvme_fabrics 00:12:51.071 rmmod nvme_keyring 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2279101 ']' 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2279101 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2279101 ']' 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2279101 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2279101 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2279101' 00:12:51.071 killing process with pid 2279101 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2279101 00:12:51.071 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2279101 00:12:51.638 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:51.638 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:51.638 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:51.638 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:51.638 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:51.638 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.638 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.638 10:25:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.541 10:25:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:53.541 00:12:53.541 real 0m7.946s 00:12:53.541 user 0m19.922s 00:12:53.541 sys 0m3.443s 00:12:53.541 10:25:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:53.541 10:25:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:53.541 ************************************ 00:12:53.541 END TEST nvmf_bdev_io_wait 00:12:53.541 ************************************ 00:12:53.541 10:25:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 
0 00:12:53.541 10:25:48 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:53.541 10:25:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:53.541 10:25:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:53.541 10:25:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:53.541 ************************************ 00:12:53.541 START TEST nvmf_queue_depth 00:12:53.541 ************************************ 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:53.541 * Looking for test storage... 00:12:53.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:53.541 10:25:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:55.446 10:25:49 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:55.446 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:55.446 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:55.446 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:55.446 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:55.446 10:25:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:55.446 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:55.446 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:55.446 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:55.446 10:25:50 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:55.446 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:55.446 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:55.446 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:55.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:12:55.447 00:12:55.447 --- 10.0.0.2 ping statistics --- 00:12:55.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.447 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:12:55.447 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:55.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:55.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:12:55.706 00:12:55.706 --- 10.0.0.1 ping statistics --- 00:12:55.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.706 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2281480 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2281480 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2281480 ']' 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
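This is the same namespace topology as the previous test, but the target now runs with -m 0x2: a hexadecimal core mask with only bit 1 set, so a single reactor on core 1 (the lone reactor notice just below confirms it). For reference, decoding the masks seen in this run:

  # mask  binary   reactors
  # 0x2   0b00010  core 1           (this nvmf_tgt)
  # 0xF   0b01111  cores 0-3        (the bdev_io_wait nvmf_tgt earlier)
  # 0x10  0b10000  core 4           (bdevperf -i 1; likewise 0x20/0x40/0x80 -> cores 5/6/7)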
00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:55.706 10:25:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.706 [2024-07-15 10:25:50.173069] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:55.706 [2024-07-15 10:25:50.173155] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.706 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.706 [2024-07-15 10:25:50.242673] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.966 [2024-07-15 10:25:50.357690] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.966 [2024-07-15 10:25:50.357759] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.966 [2024-07-15 10:25:50.357785] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.966 [2024-07-15 10:25:50.357799] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.966 [2024-07-15 10:25:50.357811] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:55.966 [2024-07-15 10:25:50.357841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:56.534 [2024-07-15 10:25:51.133111] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:56.534 Malloc0 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.534 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:56.793 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.793 10:25:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.793 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.793 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:56.793 [2024-07-15 10:25:51.192531] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.793 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.793 10:25:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2281632 00:12:56.793 10:25:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:56.793 10:25:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:56.793 10:25:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2281632 /var/tmp/bdevperf.sock 00:12:56.793 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2281632 ']' 00:12:56.793 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:56.793 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:56.794 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:56.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:56.794 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:56.794 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:56.794 [2024-07-15 10:25:51.238537] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
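bdevperf is launched here with -z, so instead of running immediately it idles with its own JSON-RPC server on the -r /var/tmp/bdevperf.sock socket; the harness then attaches the remote namespace as a local bdev and starts the queued job over that socket, which is exactly what the next trace lines show. Done by hand, the two steps would be roughly:

  # Attach the target's namespace as bdev NVMe0n1, then start the -q 1024 verify run
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests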
00:12:56.794 [2024-07-15 10:25:51.238601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2281632 ] 00:12:56.794 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.794 [2024-07-15 10:25:51.299653] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.794 [2024-07-15 10:25:51.415154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.053 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.053 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:57.053 10:25:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:57.053 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.053 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:57.311 NVMe0n1 00:12:57.311 10:25:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.311 10:25:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:57.311 Running I/O for 10 seconds... 00:13:07.350
00:13:07.350 Latency(us)
00:13:07.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:07.350 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:13:07.350 Verification LBA range: start 0x0 length 0x4000
00:13:07.350 NVMe0n1 : 10.11 8491.94 33.17 0.00 0.00 120067.36 24272.59 74177.04
00:13:07.350 ===================================================================================================================
00:13:07.350 Total : 8491.94 33.17 0.00 0.00 120067.36 24272.59 74177.04
00:13:07.350 0 00:13:07.350 10:26:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2281632 00:13:07.350 10:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2281632 ']' 00:13:07.350 10:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2281632 00:13:07.350 10:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:13:07.350 10:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:07.350 10:26:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2281632 00:13:07.608
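The verify numbers above also pass a Little's-law sanity check: sustained IOPS times mean latency should approximate the outstanding queue depth, and 8491.94 IOPS at an average of 120067.36 us works out to about 1020 commands in flight, close to the requested -q 1024:

  awk 'BEGIN { printf "%.0f in flight\n", 8491.94 * 120067.36 / 1e6 }'   # ~1020 vs. -q 1024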
=================================================================================================================== 00:13:07.608 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:07.608 10:26:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2281632 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:07.866 rmmod nvme_tcp 00:13:07.866 rmmod nvme_fabrics 00:13:07.866 rmmod nvme_keyring 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2281480 ']' 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2281480 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2281480 ']' 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2281480 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2281480 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2281480' 00:13:07.866 killing process with pid 2281480 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2281480 00:13:07.866 10:26:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2281480 00:13:08.125 10:26:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:08.125 10:26:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:08.125 10:26:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:08.125 10:26:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:08.125 10:26:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:08.125 10:26:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.125 10:26:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.125 10:26:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.664 10:26:04 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:10.664 00:13:10.664 real 0m16.679s 00:13:10.664 user 0m23.627s 00:13:10.664 sys 0m2.928s 00:13:10.664 10:26:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:10.664 10:26:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:10.664 ************************************ 00:13:10.664 END TEST nvmf_queue_depth 00:13:10.664 ************************************ 00:13:10.664 10:26:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:10.664 10:26:04 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:10.664 10:26:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:10.664 10:26:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:10.664 10:26:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:10.664 ************************************ 00:13:10.664 START TEST nvmf_target_multipath 00:13:10.664 ************************************ 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:10.664 * Looking for test storage... 00:13:10.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.664 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:10.665 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:10.665 10:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:13:10.665 10:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:12.568 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.568 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:12.568 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:12.569 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:12.569 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:12.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:13:12.569 00:13:12.569 --- 10.0.0.2 ping statistics --- 00:13:12.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.569 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:12.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:12.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:13:12.569 00:13:12.569 --- 10.0.0.1 ping statistics --- 00:13:12.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.569 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:12.569 only one NIC for nvmf test 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:12.569 10:26:06 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:12.569 rmmod nvme_tcp 00:13:12.569 rmmod nvme_fabrics 00:13:12.569 rmmod nvme_keyring 00:13:12.569 10:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:12.569 10:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:12.569 10:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:12.569 10:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:12.569 10:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:12.569 10:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:12.569 10:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:12.569 10:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:12.569 10:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:12.569 10:26:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.569 10:26:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.569 10:26:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:14.478 00:13:14.478 real 0m4.296s 00:13:14.478 user 0m0.846s 00:13:14.478 sys 0m1.442s 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:14.478 10:26:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:14.478 ************************************ 00:13:14.478 END TEST nvmf_target_multipath 00:13:14.478 ************************************ 00:13:14.478 10:26:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:14.478 10:26:09 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:14.478 10:26:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:14.478 10:26:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:14.478 10:26:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:14.736 ************************************ 00:13:14.736 START TEST nvmf_zcopy 00:13:14.736 ************************************ 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:14.736 * Looking for test storage... 
00:13:14.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:13:14.736 10:26:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.639 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:16.640 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.640 
10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:16.640 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:16.640 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:16.640 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:16.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:13:16.640 00:13:16.640 --- 10.0.0.2 ping statistics --- 00:13:16.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.640 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:16.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:13:16.640 00:13:16.640 --- 10.0.0.1 ping statistics --- 00:13:16.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.640 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2286682 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2286682 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2286682 ']' 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:16.640 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:16.898 [2024-07-15 10:26:11.298000] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:16.898 [2024-07-15 10:26:11.298082] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.898 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.898 [2024-07-15 10:26:11.369438] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.898 [2024-07-15 10:26:11.477177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.898 [2024-07-15 10:26:11.477247] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:16.898 [2024-07-15 10:26:11.477276] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.898 [2024-07-15 10:26:11.477288] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.898 [2024-07-15 10:26:11.477297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.898 [2024-07-15 10:26:11.477324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.156 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:17.156 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:13:17.156 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:17.156 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:17.156 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.156 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.157 [2024-07-15 10:26:11.632048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.157 [2024-07-15 10:26:11.648248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.157 malloc0 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.157 
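[editor's note] For reference, the target bring-up traced above — plus the namespace attach that follows immediately below — boils down to this RPC sequence. A minimal sketch using the harness's rpc.py path; the RPC socket argument is omitted here and assumed to be the default:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MiB RAM-backed bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # attach malloc0 as NSID 1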
10:26:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:17.157 { 00:13:17.157 "params": { 00:13:17.157 "name": "Nvme$subsystem", 00:13:17.157 "trtype": "$TEST_TRANSPORT", 00:13:17.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:17.157 "adrfam": "ipv4", 00:13:17.157 "trsvcid": "$NVMF_PORT", 00:13:17.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:17.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:17.157 "hdgst": ${hdgst:-false}, 00:13:17.157 "ddgst": ${ddgst:-false} 00:13:17.157 }, 00:13:17.157 "method": "bdev_nvme_attach_controller" 00:13:17.157 } 00:13:17.157 EOF 00:13:17.157 )") 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:17.157 10:26:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:17.157 "params": { 00:13:17.157 "name": "Nvme1", 00:13:17.157 "trtype": "tcp", 00:13:17.157 "traddr": "10.0.0.2", 00:13:17.157 "adrfam": "ipv4", 00:13:17.157 "trsvcid": "4420", 00:13:17.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:17.157 "hdgst": false, 00:13:17.157 "ddgst": false 00:13:17.157 }, 00:13:17.157 "method": "bdev_nvme_attach_controller" 00:13:17.157 }' 00:13:17.157 [2024-07-15 10:26:11.733164] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:17.157 [2024-07-15 10:26:11.733252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286823 ] 00:13:17.157 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.157 [2024-07-15 10:26:11.800154] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.414 [2024-07-15 10:26:11.921249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.672 Running I/O for 10 seconds... 
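[editor's note] The /dev/fd/62 path in the bdevperf command line above is worth noting: the JSON printed by gen_nvmf_target_json just before the run is evidently handed over through bash process substitution rather than a temp file. A minimal sketch of the same invocation, assuming the build-tree layout used throughout this run:

    # <(...) expands to /dev/fd/<N>; bdevperf reads the bdev_nvme_attach_controller
    # config from it, connects to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, and
    # runs the 10-second verify workload whose results follow.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192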
00:13:27.710 00:13:27.710 Latency(us) 00:13:27.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.711 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:27.711 Verification LBA range: start 0x0 length 0x1000 00:13:27.711 Nvme1n1 : 10.02 4823.58 37.68 0.00 0.00 26467.57 3883.61 36894.34 00:13:27.711 =================================================================================================================== 00:13:27.711 Total : 4823.58 37.68 0.00 0.00 26467.57 3883.61 36894.34 00:13:27.971 10:26:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2288027 00:13:27.971 10:26:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:27.971 10:26:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:27.971 10:26:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:27.971 10:26:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:27.971 10:26:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:27.971 10:26:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:27.971 10:26:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:27.971 10:26:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:27.971 { 00:13:27.971 "params": { 00:13:27.971 "name": "Nvme$subsystem", 00:13:27.971 "trtype": "$TEST_TRANSPORT", 00:13:27.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:27.971 "adrfam": "ipv4", 00:13:27.971 "trsvcid": "$NVMF_PORT", 00:13:27.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:27.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:27.971 "hdgst": ${hdgst:-false}, 00:13:27.971 "ddgst": ${ddgst:-false} 00:13:27.971 }, 00:13:27.972 "method": "bdev_nvme_attach_controller" 00:13:27.972 } 00:13:27.972 EOF 00:13:27.972 )") 00:13:27.972 10:26:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:27.972 10:26:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
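[editor's note] The wall of "Requested NSID 1 already in use" errors that follows is expected, not a failure. The trace suggests each nvmf_subsystem_add_ns RPC pauses the subsystem, fails the add (NSID 1 is already occupied by malloc0), and resumes it — note the nvmf_rpc_ns_paused frames — so pause/resume is exercised while the 5-second randrw bdevperf job (perfpid 2288027) is in flight. A sketch of the assumed pattern (the exact loop lives in target/zcopy.sh, not shown in this excerpt; rpc.py path reused from the sketch above):

    # Hammer the target with a doomed add_ns while I/O runs; every call round-trips
    # through subsystem pause -> failed namespace add -> resume.
    while kill -0 "$perfpid" 2> /dev/null; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done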
00:13:27.972 [2024-07-15 10:26:22.419439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.972 [2024-07-15 10:26:22.419482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.972 10:26:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:27.972 10:26:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:27.972 "params": { 00:13:27.972 "name": "Nvme1", 00:13:27.972 "trtype": "tcp", 00:13:27.972 "traddr": "10.0.0.2", 00:13:27.972 "adrfam": "ipv4", 00:13:27.972 "trsvcid": "4420", 00:13:27.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:27.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:27.972 "hdgst": false, 00:13:27.972 "ddgst": false 00:13:27.972 }, 00:13:27.972 "method": "bdev_nvme_attach_controller" 00:13:27.972 }' 00:13:27.972 [2024-07-15 10:26:22.427387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.972 [2024-07-15 10:26:22.427411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.972 [2024-07-15 10:26:22.435408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.972 [2024-07-15 10:26:22.435431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.972 [2024-07-15 10:26:22.443431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.972 [2024-07-15 10:26:22.443454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.972 [2024-07-15 10:26:22.451449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.972 [2024-07-15 10:26:22.451473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.972 [2024-07-15 10:26:22.456211] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:13:27.972 [2024-07-15 10:26:22.456304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2288027 ]
00:13:27.972 [... the NSID-1 error pair keeps arriving roughly every 8 ms while bdevperf initializes ...]
00:13:27.972 EAL: No free 2048 kB hugepages reported on node 1
00:13:27.972 [2024-07-15 10:26:22.516617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:28.233 [... error pairs continue ...]
00:13:28.233 [2024-07-15 10:26:22.634295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:28.234 [... error pairs continue ...]
00:13:28.234 Running I/O for 5 seconds...
00:13:28.234 [2024-07-15 10:26:22.820493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:28.234 [2024-07-15 10:26:22.820517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
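With the controller attached, bdevperf starts the timed phase: -t 5 runs for five seconds, -q 128 keeps 128 I/Os outstanding, -w randrw with -M 50 gives a 50/50 random read/write mix, and -o 8192 uses 8 KiB I/Os. A reconstructed invocation, assuming an SPDK checkout and the illustrative config helper sketched earlier:

    # Same flags as the traced command line: 5 s, QD 128, random 50/50
    # read/write, 8 KiB I/Os; the JSON config arrives via process
    # substitution (hence --json /dev/fd/63 in the trace).
    ./build/examples/bdevperf --json <(gen_target_json_sketch) \
        -t 5 -q 128 -w randrw -M 50 -o 8192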
00:13:28.496 [... for the remainder of the 5-second run the log is nothing but this same two-line pair, arriving roughly every 12 ms as each iteration of the add-namespace loop collides with the existing NSID 1; only the timestamps advance, from 10:26:22.836 through 10:26:25.523 ...]
00:13:31.092 [2024-07-15 10:26:25.534765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.092 [2024-07-15 10:26:25.534796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.092 [2024-07-15 10:26:25.546457]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.092 [2024-07-15 10:26:25.546488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.092 [2024-07-15 10:26:25.558293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.092 [2024-07-15 10:26:25.558333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.092 [2024-07-15 10:26:25.570068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.092 [2024-07-15 10:26:25.570096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.092 [2024-07-15 10:26:25.581898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.092 [2024-07-15 10:26:25.581943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.092 [2024-07-15 10:26:25.594541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.092 [2024-07-15 10:26:25.594571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.092 [2024-07-15 10:26:25.606466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.092 [2024-07-15 10:26:25.606502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.092 [2024-07-15 10:26:25.618252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.092 [2024-07-15 10:26:25.618283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.092 [2024-07-15 10:26:25.630513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.092 [2024-07-15 10:26:25.630544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.092 [2024-07-15 10:26:25.642288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.092 [2024-07-15 10:26:25.642319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.092 [2024-07-15 10:26:25.654365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.092 [2024-07-15 10:26:25.654396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.092 [2024-07-15 10:26:25.666185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.092 [2024-07-15 10:26:25.666217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.092 [2024-07-15 10:26:25.678020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.092 [2024-07-15 10:26:25.678048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.092 [2024-07-15 10:26:25.689891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.092 [2024-07-15 10:26:25.689935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.092 [2024-07-15 10:26:25.702036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.092 [2024-07-15 10:26:25.702065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.092 [2024-07-15 10:26:25.714868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.092 [2024-07-15 10:26:25.714910] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.092 [2024-07-15 10:26:25.727008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.092 [2024-07-15 10:26:25.727036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.092 [2024-07-15 10:26:25.739123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.092 [2024-07-15 10:26:25.739151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.751191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.751237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.763144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.763187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.775561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.775592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.787625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.787663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.799178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.799222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.811263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.811295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.823030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.823058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.834727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.834758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.846123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.846151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.858070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.858097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.870008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.870035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.882115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.882142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.894409] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.894440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.906205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.906236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.917976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.918004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.929942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.929970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.941962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.941990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.953645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.953675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.965989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.966017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.977853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.977894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.352 [2024-07-15 10:26:25.990060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.352 [2024-07-15 10:26:25.990088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.002006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.002036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.013976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.014015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.025966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.025994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.037970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.037998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.049759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.049789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.061651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.061681] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.072583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.072612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.084511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.084543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.096430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.096462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.108129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.108158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.120209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.120240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.132027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.132055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.143744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.143775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.155794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.155825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.168338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.168369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.180404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.180436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.192265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.192294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.204783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.204814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.216628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.216658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.228619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.228649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.240707] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.240754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.613 [2024-07-15 10:26:26.252789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.613 [2024-07-15 10:26:26.252820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.873 [2024-07-15 10:26:26.264693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.264725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.276550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.276580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.288196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.288227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.300242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.300273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.312458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.312489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.324577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.324608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.336555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.336585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.348523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.348554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.360369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.360400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.372570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.372601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.384957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.384984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.397112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.397140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.409451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.409482] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.421037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.421068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.432763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.432794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.445308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.445336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.457556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.457587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.470158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.470205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.482021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.482049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.494432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.494463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.506842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.506873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:31.874 [2024-07-15 10:26:26.519121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:31.874 [2024-07-15 10:26:26.519149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.531395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.531427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.543448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.543479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.555308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.555339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.567439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.567470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.579289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.579320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.591117] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.591145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.603386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.603417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.615618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.615649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.627788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.627819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.639846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.639884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.651732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.651763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.663410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.663441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.675503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.675534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.687448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.687479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.699264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.699296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.711416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.711448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.723390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.723420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.735671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.735702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.747892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.747937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.759841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.759872] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.771815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.771845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.135 [2024-07-15 10:26:26.783997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.135 [2024-07-15 10:26:26.784025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:26.796275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:26.796308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:26.808287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:26.808318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:26.820409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:26.820441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:26.832194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:26.832225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:26.844103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:26.844141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:26.856035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:26.856063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:26.867788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:26.867819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:26.879633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:26.879664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:26.891919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:26.891947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:26.903867] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:26.903906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:26.915920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:26.915948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:26.928446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:26.928477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:26.940760] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:26.940790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:26.952960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:26.952988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:26.965192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:26.965238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:26.977306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:26.977338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:26.990029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:26.990057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:27.002108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:27.002136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:27.014033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:27.014061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:27.026330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:27.026361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.394 [2024-07-15 10:26:27.038535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.394 [2024-07-15 10:26:27.038566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.050659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.050691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.062637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.062668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.074439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.074470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.086083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.086111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.097817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.097848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.109601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.109632] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.121470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.121501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.133559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.133590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.145609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.145640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.157141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.157169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.168818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.168849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.180968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.180996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.193077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.193105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.205044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.205073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.217246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.217278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.229462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.229493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.241563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.241594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.253743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.253774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.265633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.265664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.277425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.277457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.289498] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.289530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.653 [2024-07-15 10:26:27.301489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.653 [2024-07-15 10:26:27.301521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.312489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.312520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.324450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.324481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.336357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.336389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.348462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.348493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.360321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.360352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.372374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.372415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.384090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.384118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.395858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.395897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.407585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.407616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.419874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.419926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.431762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.431793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.443477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.443508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.455329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.455361] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.466945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.466973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.478579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.478610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.490443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.490474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.502320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.502351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.515784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.515815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.526601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.526632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.539176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.539207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.550752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.550783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:32.914 [2024-07-15 10:26:27.563024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:32.914 [2024-07-15 10:26:27.563052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.574431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.574463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.585990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.586017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.597866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.597929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.610225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.610256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.625822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.625855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.636836] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.636867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.648439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.648470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.660486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.660518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.672069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.672097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.684231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.684262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.696082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.696110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.708051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.708079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.719817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.719847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.731847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.731886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.743994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.744022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.756367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.756398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.768302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.768333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.780309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.780340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.792650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.792682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:33.175 [2024-07-15 10:26:27.804488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:33.175 [2024-07-15 10:26:27.804519] 
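The pair above is the nvmf target rejecting repeated nvmf_subsystem_add_ns RPCs because NSID 1 is already attached to the subsystem: subsystem.c refuses the add, and nvmf_rpc.c surfaces it as a failed namespace add. A minimal reproduction sketch, assuming SPDK's scripts/rpc.py; the bdev name Malloc0 and the loop count are illustrative assumptions, not taken from this log:

    # Hypothetical sketch: NSID 1 is already attached to cnode1, so every call
    # should fail with "Requested NSID 1 already in use" / "Unable to add namespace".
    for i in $(seq 1 5); do
        ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0 || true
    done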
00:13:33.435 Latency(us)
00:13:33.435 Device Information                                                           : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:33.435 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:13:33.435 Nvme1n1                                                                      :       5.05   10529.91      82.26       0.00     0.00   12043.48    5461.33   51263.72
00:13:33.435 ===================================================================================================================
00:13:33.435 Total                                                                        :              10529.91      82.26       0.00     0.00   12043.48    5461.33   51263.72
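A quick sanity check on the table, assuming the MiB/s column is derived from the reported IOPS and the 8192-byte I/O size:

    # 10529.91 I/Os per second x 8192 bytes per I/O, converted to MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 10529.91 * 8192 / 1048576 }'   # -> 82.26 MiB/s

which agrees with the MiB/s column for both the Nvme1n1 job and the Total row.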
00:13:33.435 [2024-07-15 10:26:27.883517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.435 [2024-07-15 10:26:27.883547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair continues at roughly 8 ms intervals during teardown, through 2024-07-15 10:26:28.156309 (Jenkins time 00:13:33.695) ...]
00:13:33.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2288027) - No such process
00:13:33.695 10:26:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2288027
00:13:33.695 10:26:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:33.695 10:26:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:33.695 10:26:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:33.695 10:26:28 nvmf_tcp.nvmf_zcopy
00:13:33.695 10:26:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:13:33.695 10:26:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:33.695 10:26:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:33.695 delay0
00:13:33.695 10:26:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:33.695 10:26:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:13:33.695 10:26:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:33.695 10:26:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:33.695 10:26:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:33.695 10:26:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:13:33.695 EAL: No free 2048 kB hugepages reported on node 1
00:13:33.695 [2024-07-15 10:26:28.327028] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:13:41.817 Initializing NVMe Controllers
00:13:41.817 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:41.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:41.817 Initialization complete. Launching workers.
00:13:41.817 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 266, failed: 13116
00:13:41.817 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 13282, failed to submit 100
00:13:41.817 success 13164, unsuccess 118, failed 0
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:41.817 rmmod nvme_tcp
00:13:41.817 rmmod nvme_fabrics
00:13:41.817 rmmod nvme_keyring
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2286682 ']'
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2286682
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2286682 ']'
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2286682
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2286682
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2286682'
00:13:41.817 killing process with pid 2286682
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2286682
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2286682
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:41.817 10:26:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:43.199 10:26:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:43.199 
00:13:43.199 real 0m28.693s
00:13:43.199 user 0m41.270s
00:13:43.199 sys 0m9.875s
00:13:43.199 10:26:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:13:43.199 10:26:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:43.199 ************************************
00:13:43.199 END TEST nvmf_zcopy
00:13:43.199 ************************************
00:13:43.458 10:26:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:13:43.458 10:26:37 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:13:43.458 10:26:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:13:43.458 10:26:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:43.458 10:26:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:43.458 ************************************
00:13:43.458 START TEST nvmf_nmic
00:13:43.458 ************************************
00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:13:43.458 * Looking for test storage...
00:13:43.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.458 10:26:37 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:43.458 10:26:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:45.362 
10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:45.362 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.362 10:26:39 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:45.362 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.362 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:45.363 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:45.363 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:45.363 10:26:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:13:45.363 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:13:45.363 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:13:45.363 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:13:45.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:45.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms
00:13:45.660 
00:13:45.660 --- 10.0.0.2 ping statistics ---
00:13:45.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:45.660 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:45.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:45.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms
00:13:45.660 
00:13:45.660 --- 10.0.0.1 ping statistics ---
00:13:45.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:45.660 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2291527
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2291527
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2291527 ']'
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:45.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable
00:13:45.660 10:26:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:45.660 [2024-07-15 10:26:40.218153] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:13:45.660 [2024-07-15 10:26:40.218240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:45.660 EAL: No free 2048 kB hugepages reported on node 1
00:13:45.660 [2024-07-15 10:26:40.288227] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:45.971 [2024-07-15 10:26:40.412657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:45.971 [2024-07-15 10:26:40.412729] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:45.971 [2024-07-15 10:26:40.412746] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:45.971 [2024-07-15 10:26:40.412759] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:45.971 [2024-07-15 10:26:40.412775] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:45.971 [2024-07-15 10:26:40.412867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:13:45.971 [2024-07-15 10:26:40.412958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:13:45.971 [2024-07-15 10:26:40.412932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:13:45.971 [2024-07-15 10:26:40.412961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:46.537 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:46.537 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0
00:13:46.537 10:26:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:13:46.537 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable
00:13:46.537 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:46.537 10:26:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:46.537 10:26:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:46.537 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:46.537 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:46.798 [2024-07-15 10:26:41.189042] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:46.798 Malloc0
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:46.798 [2024-07-15 10:26:41.242446] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:13:46.798 test case1: single bdev can't be used in multiple subsystems
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:46.798 [2024-07-15 10:26:41.266300] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:13:46.798 [2024-07-15 10:26:41.266328] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:13:46.798 [2024-07-15 10:26:41.266358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:46.798 request:
00:13:46.798 {
00:13:46.798 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:13:46.798 "namespace": {
00:13:46.798 "bdev_name": "Malloc0",
00:13:46.798 "no_auto_visible": false
00:13:46.798 },
00:13:46.798 "method": "nvmf_subsystem_add_ns",
00:13:46.798 "req_id": 1
00:13:46.798 }
00:13:46.798 Got JSON-RPC error response
00:13:46.798 response:
00:13:46.798 {
00:13:46.798 "code": -32602,
00:13:46.798 "message": "Invalid parameters"
00:13:46.798 }
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:13:46.798 Adding namespace failed - expected result.
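For reference, the expected failure in test case1 above can be reproduced by hand with the same RPCs the script drives through rpc_cmd. A minimal sketch, assuming the target from this run is still up on /var/tmp/spdk.sock and Malloc0 is already claimed by nqn.2016-06.io.spdk:cnode1:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Create a second subsystem, then try to claim the same bdev again.
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
  # The second call is expected to fail: Malloc0 is held exclusive_write by
  # cnode1, so the target answers with the -32602 "Invalid parameters"
  # JSON-RPC error shown above.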
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:13:46.798 test case2: host connect to nvmf target in multiple paths
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:46.798 [2024-07-15 10:26:41.274404] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:13:46.798 10:26:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:47.367 10:26:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:13:47.935 10:26:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:13:47.935 10:26:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0
00:13:47.935 10:26:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:13:47.935 10:26:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:13:47.935 10:26:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2
00:13:49.840 10:26:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:13:49.840 10:26:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:13:49.840 10:26:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:13:50.098 10:26:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:13:50.098 10:26:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:13:50.098 10:26:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0
00:13:50.098 10:26:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:13:50.098 [global]
00:13:50.098 thread=1
00:13:50.098 invalidate=1
00:13:50.098 rw=write
00:13:50.098 time_based=1
00:13:50.098 runtime=1
00:13:50.098 ioengine=libaio
00:13:50.098 direct=1
00:13:50.098 bs=4096
00:13:50.098 iodepth=1
00:13:50.098 norandommap=0
00:13:50.098 numjobs=1
00:13:50.098 
00:13:50.098 verify_dump=1
00:13:50.098 verify_backlog=512
00:13:50.098 verify_state_save=0
00:13:50.098 do_verify=1
00:13:50.098 verify=crc32c-intel
00:13:50.098 [job0]
00:13:50.098 filename=/dev/nvme0n1
00:13:50.098 Could not set queue depth (nvme0n1)
00:13:50.098 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:50.098 fio-3.35
00:13:50.098 Starting 1 thread
00:13:51.476 
00:13:51.476 job0: (groupid=0, jobs=1): err= 0: pid=2292179: Mon Jul 15 10:26:45 2024
00:13:51.476 read: IOPS=163, BW=654KiB/s (669kB/s)(672KiB/1028msec)
00:13:51.476 slat (nsec): min=6682, max=46856, avg=14796.55, stdev=6253.20
00:13:51.476 clat (usec): min=301, max=42056, avg=5252.93, stdev=13422.36
00:13:51.476 lat (usec): min=315, max=42092, avg=5267.73, stdev=13424.53
00:13:51.476 clat percentiles (usec):
00:13:51.476 | 1.00th=[ 306], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 326],
00:13:51.476 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 338],
00:13:51.476 | 70.00th=[ 343], 80.00th=[ 351], 90.00th=[41157], 95.00th=[42206],
00:13:51.476 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:13:51.476 | 99.99th=[42206]
00:13:51.476 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets
00:13:51.476 slat (nsec): min=8783, max=68436, avg=20215.44, stdev=8568.56
00:13:51.476 clat (usec): min=183, max=450, avg=251.83, stdev=54.02
00:13:51.476 lat (usec): min=193, max=480, avg=272.05, stdev=58.45
00:13:51.476 clat percentiles (usec):
00:13:51.476 | 1.00th=[ 192], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 219],
00:13:51.476 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235],
00:13:51.476 | 70.00th=[ 245], 80.00th=[ 285], 90.00th=[ 347], 95.00th=[ 371],
00:13:51.476 | 99.00th=[ 424], 99.50th=[ 445], 99.90th=[ 449], 99.95th=[ 449],
00:13:51.476 | 99.99th=[ 449]
00:13:51.476 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:13:51.476 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:51.476 lat (usec) : 250=54.26%, 500=42.79%
00:13:51.476 lat (msec) : 50=2.94%
00:13:51.476 cpu : usr=1.27%, sys=1.27%, ctx=680, majf=0, minf=2
00:13:51.476 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:51.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:51.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:51.476 issued rwts: total=168,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:51.476 latency : target=0, window=0, percentile=100.00%, depth=1
00:13:51.476 
00:13:51.476 Run status group 0 (all jobs):
00:13:51.476 READ: bw=654KiB/s (669kB/s), 654KiB/s-654KiB/s (669kB/s-669kB/s), io=672KiB (688kB), run=1028-1028msec
00:13:51.476 WRITE: bw=1992KiB/s (2040kB/s), 1992KiB/s-1992KiB/s (2040kB/s-2040kB/s), io=2048KiB (2097kB), run=1028-1028msec
00:13:51.476 
00:13:51.476 Disk stats (read/write):
00:13:51.476 nvme0n1: ios=214/512, merge=0/0, ticks=749/124, in_queue=873, util=91.98%
00:13:51.476 10:26:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:51.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0
00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0
00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:51.476 10:26:46
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:51.476 rmmod nvme_tcp 00:13:51.476 rmmod nvme_fabrics 00:13:51.476 rmmod nvme_keyring 00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2291527 ']' 00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2291527 00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2291527 ']' 00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2291527 00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:51.476 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2291527 00:13:51.735 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:51.735 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:51.735 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2291527' 00:13:51.735 killing process with pid 2291527 00:13:51.735 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2291527 00:13:51.735 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2291527 00:13:51.995 10:26:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:51.995 10:26:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:51.995 10:26:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:51.995 10:26:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:51.995 10:26:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:51.995 10:26:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.995 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.995 10:26:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.904 10:26:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:53.904 00:13:53.904 real 0m10.613s 00:13:53.904 user 0m25.113s 00:13:53.904 sys 0m2.366s 00:13:53.904 10:26:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:53.904 10:26:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:53.904 ************************************ 00:13:53.904 END TEST nvmf_nmic 00:13:53.904 ************************************ 00:13:53.904 10:26:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:53.904 10:26:48 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:53.904 10:26:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:13:53.904 10:26:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.904 10:26:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:53.904 ************************************ 00:13:53.904 START TEST nvmf_fio_target 00:13:53.904 ************************************ 00:13:53.904 10:26:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:54.163 * Looking for test storage... 00:13:54.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:54.163 10:26:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.069 10:26:50 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:56.069 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:56.069 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.069 10:26:50 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:56.069 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:56.069 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.069 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:56.070 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:56.070 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.070 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.070 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.070 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.070 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:56.070 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:56.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:13:56.328 00:13:56.328 --- 10.0.0.2 ping statistics --- 00:13:56.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.328 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:13:56.328 00:13:56.328 --- 10.0.0.1 ping statistics --- 00:13:56.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.328 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2294252 00:13:56.328 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:56.329 10:26:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2294252 00:13:56.329 10:26:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2294252 ']' 00:13:56.329 10:26:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.329 10:26:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:56.329 10:26:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:56.329 10:26:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:56.329 10:26:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.329 [2024-07-15 10:26:50.853568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:56.329 [2024-07-15 10:26:50.853647] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.329 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.329 [2024-07-15 10:26:50.918872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:56.587 [2024-07-15 10:26:51.029489] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.587 [2024-07-15 10:26:51.029548] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.587 [2024-07-15 10:26:51.029562] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.587 [2024-07-15 10:26:51.029573] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.587 [2024-07-15 10:26:51.029582] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.587 [2024-07-15 10:26:51.029670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.587 [2024-07-15 10:26:51.029735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.587 [2024-07-15 10:26:51.029811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.587 [2024-07-15 10:26:51.029814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.587 10:26:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:56.587 10:26:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:56.587 10:26:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.587 10:26:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:56.587 10:26:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.587 10:26:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.587 10:26:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:56.845 [2024-07-15 10:26:51.431577] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.845 10:26:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:57.103 10:26:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:57.103 10:26:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:57.361 10:26:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:57.361 10:26:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:57.619 10:26:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:13:57.619 10:26:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:57.877 10:26:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:57.877 10:26:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:58.136 10:26:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:58.394 10:26:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:58.394 10:26:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:58.652 10:26:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:58.652 10:26:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:58.909 10:26:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:58.909 10:26:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:59.167 10:26:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:59.425 10:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:59.425 10:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:59.682 10:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:59.682 10:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:59.940 10:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.198 [2024-07-15 10:26:54.770341] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.198 10:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:00.456 10:26:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:00.716 10:26:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:01.317 10:26:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:01.317 10:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:14:01.317 10:26:55 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:01.317 10:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:14:01.317 10:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:14:01.317 10:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:14:03.847 10:26:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:03.847 10:26:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:03.847 10:26:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:03.847 10:26:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:14:03.847 10:26:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:03.847 10:26:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:14:03.847 10:26:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:03.847 [global] 00:14:03.847 thread=1 00:14:03.847 invalidate=1 00:14:03.847 rw=write 00:14:03.847 time_based=1 00:14:03.847 runtime=1 00:14:03.847 ioengine=libaio 00:14:03.847 direct=1 00:14:03.847 bs=4096 00:14:03.847 iodepth=1 00:14:03.847 norandommap=0 00:14:03.847 numjobs=1 00:14:03.847 00:14:03.847 verify_dump=1 00:14:03.847 verify_backlog=512 00:14:03.847 verify_state_save=0 00:14:03.847 do_verify=1 00:14:03.847 verify=crc32c-intel 00:14:03.847 [job0] 00:14:03.847 filename=/dev/nvme0n1 00:14:03.847 [job1] 00:14:03.847 filename=/dev/nvme0n2 00:14:03.847 [job2] 00:14:03.847 filename=/dev/nvme0n3 00:14:03.847 [job3] 00:14:03.847 filename=/dev/nvme0n4 00:14:03.847 Could not set queue depth (nvme0n1) 00:14:03.847 Could not set queue depth (nvme0n2) 00:14:03.847 Could not set queue depth (nvme0n3) 00:14:03.847 Could not set queue depth (nvme0n4) 00:14:03.847 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:03.847 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:03.847 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:03.847 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:03.847 fio-3.35 00:14:03.847 Starting 4 threads 00:14:04.779 00:14:04.779 job0: (groupid=0, jobs=1): err= 0: pid=2295326: Mon Jul 15 10:26:59 2024 00:14:04.779 read: IOPS=52, BW=212KiB/s (217kB/s)(216KiB/1020msec) 00:14:04.779 slat (nsec): min=7800, max=34886, avg=16619.24, stdev=5554.02 00:14:04.779 clat (usec): min=294, max=41418, avg=15403.16, stdev=19816.69 00:14:04.779 lat (usec): min=307, max=41429, avg=15419.78, stdev=19817.75 00:14:04.779 clat percentiles (usec): 00:14:04.779 | 1.00th=[ 293], 5.00th=[ 297], 10.00th=[ 314], 20.00th=[ 318], 00:14:04.779 | 30.00th=[ 330], 40.00th=[ 351], 50.00th=[ 379], 60.00th=[ 441], 00:14:04.779 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:04.779 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:04.779 | 99.99th=[41157] 00:14:04.779 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:14:04.779 slat (nsec): min=9388, max=84504, avg=26671.22, stdev=13372.12 00:14:04.779 
clat (usec): min=181, max=548, avg=326.30, stdev=92.70 00:14:04.780 lat (usec): min=191, max=576, avg=352.97, stdev=98.01 00:14:04.780 clat percentiles (usec): 00:14:04.780 | 1.00th=[ 192], 5.00th=[ 208], 10.00th=[ 219], 20.00th=[ 235], 00:14:04.780 | 30.00th=[ 260], 40.00th=[ 281], 50.00th=[ 302], 60.00th=[ 351], 00:14:04.780 | 70.00th=[ 383], 80.00th=[ 429], 90.00th=[ 465], 95.00th=[ 482], 00:14:04.780 | 99.00th=[ 506], 99.50th=[ 537], 99.90th=[ 545], 99.95th=[ 545], 00:14:04.780 | 99.99th=[ 545] 00:14:04.780 bw ( KiB/s): min= 4096, max= 4096, per=34.73%, avg=4096.00, stdev= 0.00, samples=1 00:14:04.780 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:04.780 lat (usec) : 250=24.73%, 500=69.79%, 750=1.94% 00:14:04.780 lat (msec) : 50=3.53% 00:14:04.780 cpu : usr=1.08%, sys=1.57%, ctx=567, majf=0, minf=1 00:14:04.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.780 issued rwts: total=54,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.780 job1: (groupid=0, jobs=1): err= 0: pid=2295327: Mon Jul 15 10:26:59 2024 00:14:04.780 read: IOPS=21, BW=85.7KiB/s (87.7kB/s)(88.0KiB/1027msec) 00:14:04.780 slat (nsec): min=10016, max=34384, avg=18054.09, stdev=6715.69 00:14:04.780 clat (usec): min=347, max=41444, avg=39138.44, stdev=8665.01 00:14:04.780 lat (usec): min=357, max=41461, avg=39156.50, stdev=8666.77 00:14:04.780 clat percentiles (usec): 00:14:04.780 | 1.00th=[ 347], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:14:04.780 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:04.780 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:04.780 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:14:04.780 | 99.99th=[41681] 00:14:04.780 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:14:04.780 slat (nsec): min=7115, max=78437, avg=24477.87, stdev=12381.39 00:14:04.780 clat (usec): min=175, max=498, avg=287.02, stdev=69.15 00:14:04.780 lat (usec): min=190, max=512, avg=311.50, stdev=66.56 00:14:04.780 clat percentiles (usec): 00:14:04.780 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 206], 20.00th=[ 223], 00:14:04.780 | 30.00th=[ 235], 40.00th=[ 249], 50.00th=[ 273], 60.00th=[ 302], 00:14:04.780 | 70.00th=[ 338], 80.00th=[ 355], 90.00th=[ 379], 95.00th=[ 412], 00:14:04.780 | 99.00th=[ 445], 99.50th=[ 465], 99.90th=[ 498], 99.95th=[ 498], 00:14:04.780 | 99.99th=[ 498] 00:14:04.780 bw ( KiB/s): min= 4096, max= 4096, per=34.73%, avg=4096.00, stdev= 0.00, samples=1 00:14:04.780 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:04.780 lat (usec) : 250=39.14%, 500=56.93% 00:14:04.780 lat (msec) : 50=3.93% 00:14:04.780 cpu : usr=0.19%, sys=1.56%, ctx=535, majf=0, minf=1 00:14:04.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.780 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.780 job2: (groupid=0, jobs=1): err= 0: pid=2295328: Mon Jul 15 10:26:59 2024 00:14:04.780 read: IOPS=1305, BW=5221KiB/s 
(5346kB/s)(5440KiB/1042msec) 00:14:04.780 slat (nsec): min=4510, max=67415, avg=13698.75, stdev=10210.38 00:14:04.780 clat (usec): min=243, max=41501, avg=499.58, stdev=2694.78 00:14:04.780 lat (usec): min=248, max=41518, avg=513.28, stdev=2695.10 00:14:04.780 clat percentiles (usec): 00:14:04.780 | 1.00th=[ 249], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 277], 00:14:04.780 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 326], 00:14:04.780 | 70.00th=[ 334], 80.00th=[ 355], 90.00th=[ 379], 95.00th=[ 404], 00:14:04.780 | 99.00th=[ 486], 99.50th=[ 537], 99.90th=[41157], 99.95th=[41681], 00:14:04.780 | 99.99th=[41681] 00:14:04.780 write: IOPS=1474, BW=5896KiB/s (6038kB/s)(6144KiB/1042msec); 0 zone resets 00:14:04.780 slat (nsec): min=6212, max=69805, avg=11790.78, stdev=7331.25 00:14:04.780 clat (usec): min=163, max=800, avg=204.63, stdev=36.89 00:14:04.780 lat (usec): min=169, max=834, avg=216.42, stdev=40.90 00:14:04.780 clat percentiles (usec): 00:14:04.780 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:14:04.780 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:14:04.780 | 70.00th=[ 208], 80.00th=[ 225], 90.00th=[ 251], 95.00th=[ 273], 00:14:04.780 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 355], 99.95th=[ 799], 00:14:04.780 | 99.99th=[ 799] 00:14:04.780 bw ( KiB/s): min= 4096, max= 8192, per=52.10%, avg=6144.00, stdev=2896.31, samples=2 00:14:04.780 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:14:04.780 lat (usec) : 250=48.45%, 500=51.10%, 750=0.21%, 1000=0.03% 00:14:04.780 lat (msec) : 50=0.21% 00:14:04.780 cpu : usr=2.02%, sys=3.55%, ctx=2896, majf=0, minf=2 00:14:04.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.780 issued rwts: total=1360,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.780 job3: (groupid=0, jobs=1): err= 0: pid=2295329: Mon Jul 15 10:26:59 2024 00:14:04.780 read: IOPS=20, BW=80.8KiB/s (82.8kB/s)(84.0KiB/1039msec) 00:14:04.780 slat (nsec): min=14142, max=38454, avg=20382.19, stdev=7359.62 00:14:04.780 clat (usec): min=40480, max=41567, avg=40992.61, stdev=198.52 00:14:04.780 lat (usec): min=40497, max=41586, avg=41012.99, stdev=197.60 00:14:04.780 clat percentiles (usec): 00:14:04.780 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:14:04.780 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:04.780 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:04.780 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:14:04.780 | 99.99th=[41681] 00:14:04.780 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:14:04.780 slat (nsec): min=8145, max=79861, avg=26262.90, stdev=12784.13 00:14:04.780 clat (usec): min=195, max=1295, avg=308.78, stdev=84.15 00:14:04.780 lat (usec): min=214, max=1327, avg=335.04, stdev=86.22 00:14:04.780 clat percentiles (usec): 00:14:04.780 | 1.00th=[ 212], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 245], 00:14:04.780 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 306], 00:14:04.780 | 70.00th=[ 334], 80.00th=[ 375], 90.00th=[ 420], 95.00th=[ 457], 00:14:04.780 | 99.00th=[ 490], 99.50th=[ 510], 99.90th=[ 1303], 99.95th=[ 1303], 00:14:04.780 | 99.99th=[ 1303] 00:14:04.780 bw ( 
KiB/s): min= 4096, max= 4096, per=34.73%, avg=4096.00, stdev= 0.00, samples=1 00:14:04.780 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:04.780 lat (usec) : 250=24.58%, 500=70.73%, 750=0.56% 00:14:04.780 lat (msec) : 2=0.19%, 50=3.94% 00:14:04.780 cpu : usr=0.96%, sys=1.35%, ctx=535, majf=0, minf=1 00:14:04.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.780 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.780 00:14:04.780 Run status group 0 (all jobs): 00:14:04.780 READ: bw=5593KiB/s (5727kB/s), 80.8KiB/s-5221KiB/s (82.8kB/s-5346kB/s), io=5828KiB (5968kB), run=1020-1042msec 00:14:04.780 WRITE: bw=11.5MiB/s (12.1MB/s), 1971KiB/s-5896KiB/s (2018kB/s-6038kB/s), io=12.0MiB (12.6MB), run=1020-1042msec 00:14:04.780 00:14:04.780 Disk stats (read/write): 00:14:04.780 nvme0n1: ios=98/512, merge=0/0, ticks=1108/141, in_queue=1249, util=85.37% 00:14:04.780 nvme0n2: ios=40/512, merge=0/0, ticks=1538/125, in_queue=1663, util=89.52% 00:14:04.780 nvme0n3: ios=1406/1536, merge=0/0, ticks=538/299, in_queue=837, util=95.09% 00:14:04.780 nvme0n4: ios=73/512, merge=0/0, ticks=799/135, in_queue=934, util=94.11% 00:14:04.780 10:26:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:04.780 [global] 00:14:04.780 thread=1 00:14:04.780 invalidate=1 00:14:04.781 rw=randwrite 00:14:04.781 time_based=1 00:14:04.781 runtime=1 00:14:04.781 ioengine=libaio 00:14:04.781 direct=1 00:14:04.781 bs=4096 00:14:04.781 iodepth=1 00:14:04.781 norandommap=0 00:14:04.781 numjobs=1 00:14:04.781 00:14:04.781 verify_dump=1 00:14:04.781 verify_backlog=512 00:14:04.781 verify_state_save=0 00:14:04.781 do_verify=1 00:14:04.781 verify=crc32c-intel 00:14:04.781 [job0] 00:14:04.781 filename=/dev/nvme0n1 00:14:04.781 [job1] 00:14:04.781 filename=/dev/nvme0n2 00:14:04.781 [job2] 00:14:04.781 filename=/dev/nvme0n3 00:14:04.781 [job3] 00:14:04.781 filename=/dev/nvme0n4 00:14:05.040 Could not set queue depth (nvme0n1) 00:14:05.040 Could not set queue depth (nvme0n2) 00:14:05.040 Could not set queue depth (nvme0n3) 00:14:05.040 Could not set queue depth (nvme0n4) 00:14:05.040 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:05.040 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:05.040 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:05.040 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:05.040 fio-3.35 00:14:05.040 Starting 4 threads 00:14:06.418 00:14:06.418 job0: (groupid=0, jobs=1): err= 0: pid=2295555: Mon Jul 15 10:27:00 2024 00:14:06.418 read: IOPS=508, BW=2033KiB/s (2082kB/s)(2092KiB/1029msec) 00:14:06.418 slat (nsec): min=5235, max=34581, avg=8102.38, stdev=4213.96 00:14:06.418 clat (usec): min=242, max=43947, avg=1425.93, stdev=6738.31 00:14:06.418 lat (usec): min=248, max=43963, avg=1434.03, stdev=6740.34 00:14:06.418 clat percentiles (usec): 00:14:06.418 | 1.00th=[ 245], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 258], 00:14:06.418 | 
30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 302], 00:14:06.418 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 359], 95.00th=[ 486], 00:14:06.418 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:14:06.418 | 99.99th=[43779] 00:14:06.418 write: IOPS=995, BW=3981KiB/s (4076kB/s)(4096KiB/1029msec); 0 zone resets 00:14:06.418 slat (nsec): min=6996, max=79042, avg=15203.80, stdev=9632.92 00:14:06.418 clat (usec): min=174, max=858, avg=251.51, stdev=64.96 00:14:06.418 lat (usec): min=182, max=874, avg=266.71, stdev=69.19 00:14:06.418 clat percentiles (usec): 00:14:06.418 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:14:06.418 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 249], 00:14:06.418 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 306], 95.00th=[ 355], 00:14:06.418 | 99.00th=[ 449], 99.50th=[ 676], 99.90th=[ 857], 99.95th=[ 857], 00:14:06.418 | 99.99th=[ 857] 00:14:06.418 bw ( KiB/s): min= 2224, max= 5968, per=41.48%, avg=4096.00, stdev=2647.41, samples=2 00:14:06.418 iops : min= 556, max= 1492, avg=1024.00, stdev=661.85, samples=2 00:14:06.418 lat (usec) : 250=42.40%, 500=55.53%, 750=0.78%, 1000=0.32% 00:14:06.418 lat (msec) : 10=0.06%, 50=0.90% 00:14:06.418 cpu : usr=1.17%, sys=2.63%, ctx=1547, majf=0, minf=1 00:14:06.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.418 issued rwts: total=523,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:06.418 job1: (groupid=0, jobs=1): err= 0: pid=2295556: Mon Jul 15 10:27:00 2024 00:14:06.418 read: IOPS=45, BW=181KiB/s (186kB/s)(188KiB/1037msec) 00:14:06.418 slat (nsec): min=6690, max=33876, avg=14586.36, stdev=5617.37 00:14:06.418 clat (usec): min=367, max=41367, avg=19427.09, stdev=20458.18 00:14:06.418 lat (usec): min=384, max=41383, avg=19441.67, stdev=20460.57 00:14:06.418 clat percentiles (usec): 00:14:06.418 | 1.00th=[ 367], 5.00th=[ 388], 10.00th=[ 392], 20.00th=[ 420], 00:14:06.418 | 30.00th=[ 453], 40.00th=[ 469], 50.00th=[ 515], 60.00th=[41157], 00:14:06.418 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:06.418 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:06.418 | 99.99th=[41157] 00:14:06.418 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:14:06.418 slat (nsec): min=7311, max=62005, avg=16753.66, stdev=7832.19 00:14:06.418 clat (usec): min=170, max=510, avg=218.80, stdev=42.02 00:14:06.418 lat (usec): min=180, max=523, avg=235.56, stdev=44.02 00:14:06.418 clat percentiles (usec): 00:14:06.418 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:14:06.418 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:14:06.418 | 70.00th=[ 219], 80.00th=[ 235], 90.00th=[ 281], 95.00th=[ 318], 00:14:06.418 | 99.00th=[ 359], 99.50th=[ 379], 99.90th=[ 510], 99.95th=[ 510], 00:14:06.418 | 99.99th=[ 510] 00:14:06.418 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:14:06.418 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:06.418 lat (usec) : 250=78.18%, 500=17.35%, 750=0.54% 00:14:06.418 lat (msec) : 50=3.94% 00:14:06.418 cpu : usr=0.39%, sys=0.97%, ctx=561, majf=0, minf=2 00:14:06.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:14:06.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.418 issued rwts: total=47,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:06.418 job2: (groupid=0, jobs=1): err= 0: pid=2295557: Mon Jul 15 10:27:00 2024 00:14:06.418 read: IOPS=333, BW=1333KiB/s (1365kB/s)(1344KiB/1008msec) 00:14:06.418 slat (nsec): min=6116, max=64203, avg=21573.92, stdev=11340.84 00:14:06.418 clat (usec): min=286, max=41077, avg=2541.44, stdev=9159.01 00:14:06.418 lat (usec): min=298, max=41097, avg=2563.01, stdev=9157.90 00:14:06.418 clat percentiles (usec): 00:14:06.418 | 1.00th=[ 293], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 306], 00:14:06.418 | 30.00th=[ 314], 40.00th=[ 330], 50.00th=[ 351], 60.00th=[ 367], 00:14:06.418 | 70.00th=[ 379], 80.00th=[ 404], 90.00th=[ 494], 95.00th=[41157], 00:14:06.418 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:06.418 | 99.99th=[41157] 00:14:06.418 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:14:06.418 slat (nsec): min=8188, max=64418, avg=21212.22, stdev=10247.05 00:14:06.418 clat (usec): min=194, max=412, avg=255.15, stdev=47.75 00:14:06.418 lat (usec): min=212, max=430, avg=276.36, stdev=45.72 00:14:06.418 clat percentiles (usec): 00:14:06.418 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:14:06.418 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 237], 60.00th=[ 251], 00:14:06.418 | 70.00th=[ 273], 80.00th=[ 297], 90.00th=[ 330], 95.00th=[ 347], 00:14:06.418 | 99.00th=[ 392], 99.50th=[ 404], 99.90th=[ 412], 99.95th=[ 412], 00:14:06.418 | 99.99th=[ 412] 00:14:06.418 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:14:06.418 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:06.418 lat (usec) : 250=36.20%, 500=60.02%, 750=1.18%, 1000=0.35% 00:14:06.418 lat (msec) : 2=0.12%, 50=2.12% 00:14:06.418 cpu : usr=1.09%, sys=1.69%, ctx=849, majf=0, minf=1 00:14:06.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.418 issued rwts: total=336,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:06.418 job3: (groupid=0, jobs=1): err= 0: pid=2295558: Mon Jul 15 10:27:00 2024 00:14:06.418 read: IOPS=97, BW=388KiB/s (397kB/s)(392KiB/1010msec) 00:14:06.418 slat (nsec): min=4629, max=36997, avg=16101.16, stdev=8893.82 00:14:06.418 clat (usec): min=313, max=41395, avg=8672.76, stdev=16474.94 00:14:06.418 lat (usec): min=320, max=41408, avg=8688.86, stdev=16474.76 00:14:06.418 clat percentiles (usec): 00:14:06.418 | 1.00th=[ 314], 5.00th=[ 318], 10.00th=[ 318], 20.00th=[ 355], 00:14:06.418 | 30.00th=[ 375], 40.00th=[ 379], 50.00th=[ 379], 60.00th=[ 388], 00:14:06.418 | 70.00th=[ 408], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:14:06.419 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:06.419 | 99.99th=[41157] 00:14:06.419 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:14:06.419 slat (nsec): min=7337, max=63035, avg=21921.34, stdev=10978.75 00:14:06.419 clat (usec): min=193, max=637, avg=280.76, stdev=71.36 00:14:06.419 lat 
(usec): min=210, max=664, avg=302.68, stdev=74.72 00:14:06.419 clat percentiles (usec): 00:14:06.419 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 221], 00:14:06.419 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 247], 60.00th=[ 273], 00:14:06.419 | 70.00th=[ 326], 80.00th=[ 347], 90.00th=[ 396], 95.00th=[ 408], 00:14:06.419 | 99.00th=[ 457], 99.50th=[ 498], 99.90th=[ 635], 99.95th=[ 635], 00:14:06.419 | 99.99th=[ 635] 00:14:06.419 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:14:06.419 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:06.419 lat (usec) : 250=42.95%, 500=53.28%, 750=0.49% 00:14:06.419 lat (msec) : 50=3.28% 00:14:06.419 cpu : usr=0.59%, sys=1.29%, ctx=612, majf=0, minf=1 00:14:06.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:06.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.419 issued rwts: total=98,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:06.419 00:14:06.419 Run status group 0 (all jobs): 00:14:06.419 READ: bw=3873KiB/s (3966kB/s), 181KiB/s-2033KiB/s (186kB/s-2082kB/s), io=4016KiB (4112kB), run=1008-1037msec 00:14:06.419 WRITE: bw=9875KiB/s (10.1MB/s), 1975KiB/s-3981KiB/s (2022kB/s-4076kB/s), io=10.0MiB (10.5MB), run=1008-1037msec 00:14:06.419 00:14:06.419 Disk stats (read/write): 00:14:06.419 nvme0n1: ios=568/1024, merge=0/0, ticks=561/240, in_queue=801, util=87.27% 00:14:06.419 nvme0n2: ios=65/512, merge=0/0, ticks=1625/108, in_queue=1733, util=94.11% 00:14:06.419 nvme0n3: ios=371/512, merge=0/0, ticks=1077/116, in_queue=1193, util=96.88% 00:14:06.419 nvme0n4: ios=147/512, merge=0/0, ticks=942/143, in_queue=1085, util=98.32% 00:14:06.419 10:27:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:06.419 [global] 00:14:06.419 thread=1 00:14:06.419 invalidate=1 00:14:06.419 rw=write 00:14:06.419 time_based=1 00:14:06.419 runtime=1 00:14:06.419 ioengine=libaio 00:14:06.419 direct=1 00:14:06.419 bs=4096 00:14:06.419 iodepth=128 00:14:06.419 norandommap=0 00:14:06.419 numjobs=1 00:14:06.419 00:14:06.419 verify_dump=1 00:14:06.419 verify_backlog=512 00:14:06.419 verify_state_save=0 00:14:06.419 do_verify=1 00:14:06.419 verify=crc32c-intel 00:14:06.419 [job0] 00:14:06.419 filename=/dev/nvme0n1 00:14:06.419 [job1] 00:14:06.419 filename=/dev/nvme0n2 00:14:06.419 [job2] 00:14:06.419 filename=/dev/nvme0n3 00:14:06.419 [job3] 00:14:06.419 filename=/dev/nvme0n4 00:14:06.419 Could not set queue depth (nvme0n1) 00:14:06.419 Could not set queue depth (nvme0n2) 00:14:06.419 Could not set queue depth (nvme0n3) 00:14:06.419 Could not set queue depth (nvme0n4) 00:14:06.676 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:06.676 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:06.676 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:06.676 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:06.676 fio-3.35 00:14:06.676 Starting 4 threads 00:14:08.049 00:14:08.049 job0: (groupid=0, jobs=1): err= 0: pid=2295856: Mon Jul 15 10:27:02 2024 
00:14:08.049 read: IOPS=5136, BW=20.1MiB/s (21.0MB/s)(20.1MiB/1004msec) 00:14:08.049 slat (usec): min=3, max=14890, avg=93.19, stdev=547.12 00:14:08.049 clat (usec): min=1481, max=42292, avg=12325.44, stdev=4626.93 00:14:08.049 lat (usec): min=2078, max=42297, avg=12418.63, stdev=4653.03 00:14:08.049 clat percentiles (usec): 00:14:08.049 | 1.00th=[ 3982], 5.00th=[ 8160], 10.00th=[ 9372], 20.00th=[10290], 00:14:08.049 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11863], 00:14:08.049 | 70.00th=[12387], 80.00th=[13435], 90.00th=[15139], 95.00th=[20579], 00:14:08.049 | 99.00th=[36439], 99.50th=[39584], 99.90th=[42206], 99.95th=[42206], 00:14:08.049 | 99.99th=[42206] 00:14:08.049 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:14:08.049 slat (usec): min=3, max=19857, avg=81.01, stdev=485.14 00:14:08.049 clat (usec): min=1440, max=32008, avg=11320.70, stdev=3468.69 00:14:08.049 lat (usec): min=1453, max=32042, avg=11401.71, stdev=3488.88 00:14:08.049 clat percentiles (usec): 00:14:08.049 | 1.00th=[ 3720], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[ 9372], 00:14:08.049 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10945], 60.00th=[11207], 00:14:08.050 | 70.00th=[11731], 80.00th=[13042], 90.00th=[14222], 95.00th=[15008], 00:14:08.050 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29230], 99.95th=[31589], 00:14:08.050 | 99.99th=[32113] 00:14:08.050 bw ( KiB/s): min=21320, max=23016, per=32.84%, avg=22168.00, stdev=1199.25, samples=2 00:14:08.050 iops : min= 5330, max= 5754, avg=5542.00, stdev=299.81, samples=2 00:14:08.050 lat (msec) : 2=0.10%, 4=1.09%, 10=19.80%, 20=75.32%, 50=3.69% 00:14:08.050 cpu : usr=7.38%, sys=10.07%, ctx=551, majf=0, minf=9 00:14:08.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:08.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:08.050 issued rwts: total=5157,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:08.050 job1: (groupid=0, jobs=1): err= 0: pid=2295857: Mon Jul 15 10:27:02 2024 00:14:08.050 read: IOPS=5338, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1007msec) 00:14:08.050 slat (usec): min=2, max=14538, avg=91.01, stdev=566.45 00:14:08.050 clat (usec): min=475, max=42921, avg=11809.53, stdev=5111.50 00:14:08.050 lat (usec): min=6555, max=42941, avg=11900.54, stdev=5145.04 00:14:08.050 clat percentiles (usec): 00:14:08.050 | 1.00th=[ 7570], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10159], 00:14:08.050 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:14:08.050 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12387], 95.00th=[14222], 00:14:08.050 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:14:08.050 | 99.99th=[42730] 00:14:08.050 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:14:08.050 slat (usec): min=3, max=25772, avg=81.66, stdev=562.59 00:14:08.050 clat (usec): min=908, max=38980, avg=11277.68, stdev=4325.32 00:14:08.050 lat (usec): min=5391, max=39481, avg=11359.34, stdev=4343.88 00:14:08.050 clat percentiles (usec): 00:14:08.050 | 1.00th=[ 5473], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[ 9896], 00:14:08.050 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:14:08.050 | 70.00th=[10945], 80.00th=[11207], 90.00th=[12387], 95.00th=[15270], 00:14:08.050 | 99.00th=[37487], 99.50th=[39060], 99.90th=[39060], 
99.95th=[39060], 00:14:08.050 | 99.99th=[39060] 00:14:08.050 bw ( KiB/s): min=21232, max=23824, per=33.37%, avg=22528.00, stdev=1832.82, samples=2 00:14:08.050 iops : min= 5308, max= 5956, avg=5632.00, stdev=458.21, samples=2 00:14:08.050 lat (usec) : 500=0.01%, 1000=0.01% 00:14:08.050 lat (msec) : 10=21.81%, 20=74.71%, 50=3.46% 00:14:08.050 cpu : usr=7.16%, sys=10.34%, ctx=430, majf=0, minf=15 00:14:08.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:08.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:08.050 issued rwts: total=5376,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:08.050 job2: (groupid=0, jobs=1): err= 0: pid=2295864: Mon Jul 15 10:27:02 2024 00:14:08.050 read: IOPS=2976, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1007msec) 00:14:08.050 slat (usec): min=3, max=12572, avg=147.84, stdev=844.94 00:14:08.050 clat (usec): min=3396, max=44305, avg=18659.37, stdev=5191.49 00:14:08.050 lat (usec): min=7515, max=44320, avg=18807.21, stdev=5236.63 00:14:08.050 clat percentiles (usec): 00:14:08.050 | 1.00th=[ 8225], 5.00th=[13042], 10.00th=[13960], 20.00th=[14877], 00:14:08.050 | 30.00th=[16057], 40.00th=[17171], 50.00th=[17695], 60.00th=[18220], 00:14:08.050 | 70.00th=[19530], 80.00th=[21365], 90.00th=[23725], 95.00th=[28967], 00:14:08.050 | 99.00th=[38536], 99.50th=[38536], 99.90th=[41157], 99.95th=[41681], 00:14:08.050 | 99.99th=[44303] 00:14:08.050 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:14:08.050 slat (usec): min=4, max=26105, avg=164.87, stdev=1123.18 00:14:08.050 clat (usec): min=2546, max=77689, avg=23157.87, stdev=10374.69 00:14:08.050 lat (usec): min=2592, max=77709, avg=23322.74, stdev=10482.04 00:14:08.050 clat percentiles (usec): 00:14:08.050 | 1.00th=[ 7701], 5.00th=[11731], 10.00th=[13173], 20.00th=[15139], 00:14:08.050 | 30.00th=[15533], 40.00th=[16581], 50.00th=[20055], 60.00th=[24249], 00:14:08.050 | 70.00th=[25822], 80.00th=[32113], 90.00th=[35914], 95.00th=[45876], 00:14:08.050 | 99.00th=[52691], 99.50th=[52691], 99.90th=[58459], 99.95th=[70779], 00:14:08.050 | 99.99th=[78119] 00:14:08.050 bw ( KiB/s): min=12208, max=12368, per=18.20%, avg=12288.00, stdev=113.14, samples=2 00:14:08.050 iops : min= 3052, max= 3092, avg=3072.00, stdev=28.28, samples=2 00:14:08.050 lat (msec) : 4=0.03%, 10=1.57%, 20=60.42%, 50=35.87%, 100=2.11% 00:14:08.050 cpu : usr=4.08%, sys=7.46%, ctx=299, majf=0, minf=17 00:14:08.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:14:08.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:08.050 issued rwts: total=2997,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:08.050 job3: (groupid=0, jobs=1): err= 0: pid=2295865: Mon Jul 15 10:27:02 2024 00:14:08.050 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:14:08.050 slat (usec): min=2, max=22484, avg=218.06, stdev=1484.98 00:14:08.050 clat (usec): min=9614, max=73309, avg=27351.57, stdev=16256.16 00:14:08.050 lat (usec): min=9632, max=73317, avg=27569.63, stdev=16354.12 00:14:08.050 clat percentiles (usec): 00:14:08.050 | 1.00th=[10552], 5.00th=[11338], 10.00th=[12387], 20.00th=[13566], 00:14:08.050 | 30.00th=[15008], 
40.00th=[19530], 50.00th=[21627], 60.00th=[24773], 00:14:08.050 | 70.00th=[28705], 80.00th=[40633], 90.00th=[56361], 95.00th=[61604], 00:14:08.050 | 99.00th=[72877], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:14:08.050 | 99.99th=[72877] 00:14:08.050 write: IOPS=2647, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1004msec); 0 zone resets 00:14:08.050 slat (usec): min=3, max=18293, avg=155.49, stdev=1035.96 00:14:08.050 clat (usec): min=3262, max=69963, avg=20951.54, stdev=10366.50 00:14:08.050 lat (usec): min=3690, max=70003, avg=21107.03, stdev=10400.67 00:14:08.050 clat percentiles (usec): 00:14:08.050 | 1.00th=[ 4178], 5.00th=[ 7570], 10.00th=[11863], 20.00th=[13173], 00:14:08.050 | 30.00th=[14877], 40.00th=[17695], 50.00th=[17957], 60.00th=[19268], 00:14:08.050 | 70.00th=[23200], 80.00th=[30016], 90.00th=[34341], 95.00th=[38011], 00:14:08.050 | 99.00th=[63701], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:14:08.050 | 99.99th=[69731] 00:14:08.050 bw ( KiB/s): min= 8192, max=12288, per=15.17%, avg=10240.00, stdev=2896.31, samples=2 00:14:08.050 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:14:08.050 lat (msec) : 4=0.19%, 10=2.72%, 20=51.76%, 50=36.93%, 100=8.39% 00:14:08.050 cpu : usr=2.89%, sys=5.58%, ctx=191, majf=0, minf=9 00:14:08.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:08.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:08.050 issued rwts: total=2560,2658,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:08.050 00:14:08.050 Run status group 0 (all jobs): 00:14:08.050 READ: bw=62.4MiB/s (65.4MB/s), 9.96MiB/s-20.9MiB/s (10.4MB/s-21.9MB/s), io=62.9MiB (65.9MB), run=1004-1007msec 00:14:08.050 WRITE: bw=65.9MiB/s (69.1MB/s), 10.3MiB/s-21.9MiB/s (10.8MB/s-23.0MB/s), io=66.4MiB (69.6MB), run=1004-1007msec 00:14:08.050 00:14:08.050 Disk stats (read/write): 00:14:08.050 nvme0n1: ios=4473/4608, merge=0/0, ticks=25317/25998, in_queue=51315, util=98.70% 00:14:08.050 nvme0n2: ios=4699/5120, merge=0/0, ticks=25459/25426, in_queue=50885, util=97.97% 00:14:08.050 nvme0n3: ios=2607/2639, merge=0/0, ticks=24836/28416, in_queue=53252, util=96.45% 00:14:08.050 nvme0n4: ios=1906/2048, merge=0/0, ticks=23039/20100, in_queue=43139, util=96.63% 00:14:08.050 10:27:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:08.050 [global] 00:14:08.050 thread=1 00:14:08.050 invalidate=1 00:14:08.050 rw=randwrite 00:14:08.050 time_based=1 00:14:08.050 runtime=1 00:14:08.050 ioengine=libaio 00:14:08.050 direct=1 00:14:08.050 bs=4096 00:14:08.050 iodepth=128 00:14:08.050 norandommap=0 00:14:08.050 numjobs=1 00:14:08.050 00:14:08.050 verify_dump=1 00:14:08.050 verify_backlog=512 00:14:08.050 verify_state_save=0 00:14:08.050 do_verify=1 00:14:08.050 verify=crc32c-intel 00:14:08.050 [job0] 00:14:08.050 filename=/dev/nvme0n1 00:14:08.050 [job1] 00:14:08.050 filename=/dev/nvme0n2 00:14:08.050 [job2] 00:14:08.050 filename=/dev/nvme0n3 00:14:08.050 [job3] 00:14:08.050 filename=/dev/nvme0n4 00:14:08.050 Could not set queue depth (nvme0n1) 00:14:08.050 Could not set queue depth (nvme0n2) 00:14:08.050 Could not set queue depth (nvme0n3) 00:14:08.050 Could not set queue depth (nvme0n4) 00:14:08.050 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:08.050 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:08.050 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:08.050 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:08.050 fio-3.35 00:14:08.050 Starting 4 threads 00:14:09.424 00:14:09.424 job0: (groupid=0, jobs=1): err= 0: pid=2296224: Mon Jul 15 10:27:03 2024 00:14:09.424 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:14:09.424 slat (usec): min=2, max=7795, avg=115.06, stdev=615.32 00:14:09.424 clat (usec): min=9518, max=24860, avg=14306.87, stdev=2374.51 00:14:09.424 lat (usec): min=9526, max=28822, avg=14421.92, stdev=2432.15 00:14:09.424 clat percentiles (usec): 00:14:09.424 | 1.00th=[10814], 5.00th=[11469], 10.00th=[11994], 20.00th=[12518], 00:14:09.424 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:14:09.424 | 70.00th=[14746], 80.00th=[16319], 90.00th=[17433], 95.00th=[19006], 00:14:09.424 | 99.00th=[21890], 99.50th=[24249], 99.90th=[24773], 99.95th=[24773], 00:14:09.424 | 99.99th=[24773] 00:14:09.424 write: IOPS=3811, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1005msec); 0 zone resets 00:14:09.424 slat (usec): min=3, max=7174, avg=144.56, stdev=613.11 00:14:09.424 clat (usec): min=4303, max=32617, avg=19816.27, stdev=7308.61 00:14:09.424 lat (usec): min=6641, max=32625, avg=19960.83, stdev=7357.09 00:14:09.424 clat percentiles (usec): 00:14:09.424 | 1.00th=[ 8029], 5.00th=[11207], 10.00th=[11731], 20.00th=[12911], 00:14:09.424 | 30.00th=[13698], 40.00th=[14484], 50.00th=[19006], 60.00th=[21890], 00:14:09.424 | 70.00th=[24249], 80.00th=[28181], 90.00th=[31327], 95.00th=[31851], 00:14:09.424 | 99.00th=[32113], 99.50th=[32113], 99.90th=[32637], 99.95th=[32637], 00:14:09.424 | 99.99th=[32637] 00:14:09.424 bw ( KiB/s): min=13512, max=16120, per=22.88%, avg=14816.00, stdev=1844.13, samples=2 00:14:09.424 iops : min= 3378, max= 4030, avg=3704.00, stdev=461.03, samples=2 00:14:09.424 lat (msec) : 10=1.39%, 20=72.45%, 50=26.16% 00:14:09.424 cpu : usr=4.38%, sys=6.37%, ctx=422, majf=0, minf=11 00:14:09.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:09.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:09.424 issued rwts: total=3584,3831,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:09.424 job1: (groupid=0, jobs=1): err= 0: pid=2296244: Mon Jul 15 10:27:03 2024 00:14:09.424 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:14:09.424 slat (usec): min=2, max=19051, avg=259.56, stdev=1337.20 00:14:09.424 clat (usec): min=8544, max=63241, avg=33577.44, stdev=14540.10 00:14:09.424 lat (usec): min=8548, max=63247, avg=33837.00, stdev=14585.01 00:14:09.424 clat percentiles (usec): 00:14:09.424 | 1.00th=[ 9110], 5.00th=[11600], 10.00th=[16712], 20.00th=[20055], 00:14:09.424 | 30.00th=[21890], 40.00th=[25560], 50.00th=[32375], 60.00th=[39584], 00:14:09.424 | 70.00th=[44303], 80.00th=[47449], 90.00th=[54264], 95.00th=[59507], 00:14:09.424 | 99.00th=[61080], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:14:09.424 | 99.99th=[63177] 00:14:09.424 write: IOPS=2706, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1002msec); 0 zone resets 00:14:09.424 slat 
(usec): min=3, max=6786, avg=116.90, stdev=582.28 00:14:09.424 clat (usec): min=1294, max=29379, avg=15055.28, stdev=3369.28 00:14:09.424 lat (usec): min=4187, max=29387, avg=15172.17, stdev=3340.46 00:14:09.424 clat percentiles (usec): 00:14:09.424 | 1.00th=[ 4555], 5.00th=[10290], 10.00th=[10945], 20.00th=[11863], 00:14:09.424 | 30.00th=[15008], 40.00th=[15533], 50.00th=[15795], 60.00th=[15926], 00:14:09.424 | 70.00th=[15926], 80.00th=[16057], 90.00th=[17957], 95.00th=[20579], 00:14:09.424 | 99.00th=[27919], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:14:09.424 | 99.99th=[29492] 00:14:09.424 bw ( KiB/s): min= 8392, max=12288, per=15.97%, avg=10340.00, stdev=2754.89, samples=2 00:14:09.424 iops : min= 2098, max= 3072, avg=2585.00, stdev=688.72, samples=2 00:14:09.424 lat (msec) : 2=0.02%, 10=2.98%, 20=54.67%, 50=35.28%, 100=7.06% 00:14:09.424 cpu : usr=2.60%, sys=2.90%, ctx=269, majf=0, minf=15 00:14:09.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:09.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:09.424 issued rwts: total=2560,2712,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:09.424 job2: (groupid=0, jobs=1): err= 0: pid=2296255: Mon Jul 15 10:27:03 2024 00:14:09.424 read: IOPS=4458, BW=17.4MiB/s (18.3MB/s)(17.5MiB/1005msec) 00:14:09.424 slat (usec): min=2, max=13907, avg=116.04, stdev=701.37 00:14:09.424 clat (usec): min=911, max=50188, avg=14782.20, stdev=6199.32 00:14:09.424 lat (usec): min=4823, max=50204, avg=14898.24, stdev=6258.02 00:14:09.424 clat percentiles (usec): 00:14:09.424 | 1.00th=[ 9110], 5.00th=[10290], 10.00th=[11076], 20.00th=[11994], 00:14:09.424 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13173], 60.00th=[13566], 00:14:09.424 | 70.00th=[14091], 80.00th=[15008], 90.00th=[17957], 95.00th=[33817], 00:14:09.424 | 99.00th=[40109], 99.50th=[43254], 99.90th=[43779], 99.95th=[46924], 00:14:09.424 | 99.99th=[50070] 00:14:09.424 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:14:09.424 slat (usec): min=3, max=14491, avg=92.17, stdev=620.88 00:14:09.424 clat (usec): min=965, max=36690, avg=13299.48, stdev=4134.56 00:14:09.424 lat (usec): min=973, max=36702, avg=13391.65, stdev=4141.52 00:14:09.424 clat percentiles (usec): 00:14:09.424 | 1.00th=[ 7898], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[10945], 00:14:09.424 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12518], 60.00th=[12911], 00:14:09.424 | 70.00th=[13566], 80.00th=[15139], 90.00th=[16909], 95.00th=[19006], 00:14:09.424 | 99.00th=[32113], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:14:09.424 | 99.99th=[36439] 00:14:09.424 bw ( KiB/s): min=16384, max=20480, per=28.46%, avg=18432.00, stdev=2896.31, samples=2 00:14:09.424 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:14:09.424 lat (usec) : 1000=0.04% 00:14:09.424 lat (msec) : 2=0.04%, 4=0.01%, 10=10.17%, 20=83.63%, 50=6.10% 00:14:09.424 lat (msec) : 100=0.01% 00:14:09.424 cpu : usr=4.48%, sys=7.17%, ctx=303, majf=0, minf=9 00:14:09.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:09.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:09.424 issued rwts: total=4481,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.424 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:14:09.424 job3: (groupid=0, jobs=1): err= 0: pid=2296256: Mon Jul 15 10:27:03 2024 00:14:09.424 read: IOPS=4679, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1005msec) 00:14:09.424 slat (usec): min=2, max=6260, avg=97.97, stdev=571.25 00:14:09.424 clat (usec): min=1003, max=20843, avg=12953.64, stdev=2229.20 00:14:09.424 lat (usec): min=1011, max=20857, avg=13051.61, stdev=2274.57 00:14:09.424 clat percentiles (usec): 00:14:09.424 | 1.00th=[ 4752], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11731], 00:14:09.424 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12780], 60.00th=[13042], 00:14:09.424 | 70.00th=[13435], 80.00th=[14353], 90.00th=[15664], 95.00th=[16581], 00:14:09.424 | 99.00th=[19530], 99.50th=[20317], 99.90th=[20317], 99.95th=[20317], 00:14:09.425 | 99.99th=[20841] 00:14:09.425 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:14:09.425 slat (usec): min=4, max=26350, avg=94.60, stdev=656.27 00:14:09.425 clat (usec): min=1345, max=34659, avg=12970.77, stdev=3423.68 00:14:09.425 lat (usec): min=1355, max=49497, avg=13065.37, stdev=3466.67 00:14:09.425 clat percentiles (usec): 00:14:09.425 | 1.00th=[ 7439], 5.00th=[10028], 10.00th=[10945], 20.00th=[11600], 00:14:09.425 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:14:09.425 | 70.00th=[12911], 80.00th=[13698], 90.00th=[15270], 95.00th=[17433], 00:14:09.425 | 99.00th=[28967], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:14:09.425 | 99.99th=[34866] 00:14:09.425 bw ( KiB/s): min=20224, max=20480, per=31.43%, avg=20352.00, stdev=181.02, samples=2 00:14:09.425 iops : min= 5056, max= 5120, avg=5088.00, stdev=45.25, samples=2 00:14:09.425 lat (msec) : 2=0.11%, 4=0.25%, 10=4.73%, 20=92.77%, 50=2.13% 00:14:09.425 cpu : usr=5.38%, sys=9.26%, ctx=402, majf=0, minf=15 00:14:09.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:09.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:09.425 issued rwts: total=4703,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.425 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:09.425 00:14:09.425 Run status group 0 (all jobs): 00:14:09.425 READ: bw=59.6MiB/s (62.5MB/s), 9.98MiB/s-18.3MiB/s (10.5MB/s-19.2MB/s), io=59.9MiB (62.8MB), run=1002-1005msec 00:14:09.425 WRITE: bw=63.2MiB/s (66.3MB/s), 10.6MiB/s-19.9MiB/s (11.1MB/s-20.9MB/s), io=63.6MiB (66.6MB), run=1002-1005msec 00:14:09.425 00:14:09.425 Disk stats (read/write): 00:14:09.425 nvme0n1: ios=3112/3183, merge=0/0, ticks=22179/29318, in_queue=51497, util=89.48% 00:14:09.425 nvme0n2: ios=2072/2135, merge=0/0, ticks=21126/9013, in_queue=30139, util=96.54% 00:14:09.425 nvme0n3: ios=4131/4139, merge=0/0, ticks=25587/28375, in_queue=53962, util=89.54% 00:14:09.425 nvme0n4: ios=4153/4391, merge=0/0, ticks=28005/27606, in_queue=55611, util=96.73% 00:14:09.425 10:27:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:09.425 10:27:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2296391 00:14:09.425 10:27:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:09.425 10:27:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:09.425 [global] 00:14:09.425 thread=1 00:14:09.425 invalidate=1 00:14:09.425 rw=read 00:14:09.425 time_based=1 00:14:09.425 runtime=10 
00:14:09.425 ioengine=libaio 00:14:09.425 direct=1 00:14:09.425 bs=4096 00:14:09.425 iodepth=1 00:14:09.425 norandommap=1 00:14:09.425 numjobs=1 00:14:09.425 00:14:09.425 [job0] 00:14:09.425 filename=/dev/nvme0n1 00:14:09.425 [job1] 00:14:09.425 filename=/dev/nvme0n2 00:14:09.425 [job2] 00:14:09.425 filename=/dev/nvme0n3 00:14:09.425 [job3] 00:14:09.425 filename=/dev/nvme0n4 00:14:09.425 Could not set queue depth (nvme0n1) 00:14:09.425 Could not set queue depth (nvme0n2) 00:14:09.425 Could not set queue depth (nvme0n3) 00:14:09.425 Could not set queue depth (nvme0n4) 00:14:09.425 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:09.425 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:09.425 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:09.425 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:09.425 fio-3.35 00:14:09.425 Starting 4 threads 00:14:12.704 10:27:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:12.704 10:27:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:12.704 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=307200, buflen=4096 00:14:12.704 fio: pid=2296489, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:12.704 10:27:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:12.704 10:27:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:12.704 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=430080, buflen=4096 00:14:12.704 fio: pid=2296488, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:12.961 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=2703360, buflen=4096 00:14:12.961 fio: pid=2296486, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:12.961 10:27:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:12.961 10:27:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:13.525 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=43139072, buflen=4096 00:14:13.525 fio: pid=2296487, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:13.525 10:27:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:13.525 10:27:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:13.525 00:14:13.525 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2296486: Mon Jul 15 10:27:07 2024 00:14:13.525 read: IOPS=190, BW=761KiB/s (780kB/s)(2640KiB/3467msec) 00:14:13.525 slat (usec): min=5, max=16817, avg=72.16, stdev=962.86 00:14:13.525 clat (usec): min=249, max=46004, avg=5143.46, stdev=13354.33 00:14:13.525 lat (usec): min=255, max=57968, 
avg=5193.53, stdev=13443.81 00:14:13.525 clat percentiles (usec): 00:14:13.525 | 1.00th=[ 255], 5.00th=[ 262], 10.00th=[ 265], 20.00th=[ 269], 00:14:13.525 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:14:13.525 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[41157], 95.00th=[42206], 00:14:13.525 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:14:13.525 | 99.99th=[45876] 00:14:13.525 bw ( KiB/s): min= 96, max= 96, per=0.79%, avg=96.00, stdev= 0.00, samples=6 00:14:13.525 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=6 00:14:13.525 lat (usec) : 250=0.15%, 500=87.90% 00:14:13.525 lat (msec) : 10=0.15%, 50=11.65% 00:14:13.525 cpu : usr=0.12%, sys=0.23%, ctx=664, majf=0, minf=1 00:14:13.525 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.525 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.525 issued rwts: total=661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.525 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.525 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2296487: Mon Jul 15 10:27:07 2024 00:14:13.525 read: IOPS=2813, BW=11.0MiB/s (11.5MB/s)(41.1MiB/3744msec) 00:14:13.525 slat (usec): min=4, max=26845, avg=18.16, stdev=371.66 00:14:13.525 clat (usec): min=242, max=1126, avg=332.58, stdev=41.43 00:14:13.525 lat (usec): min=248, max=27563, avg=350.74, stdev=377.80 00:14:13.525 clat percentiles (usec): 00:14:13.525 | 1.00th=[ 258], 5.00th=[ 277], 10.00th=[ 293], 20.00th=[ 306], 00:14:13.525 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 338], 00:14:13.525 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 379], 00:14:13.525 | 99.00th=[ 461], 99.50th=[ 474], 99.90th=[ 766], 99.95th=[ 881], 00:14:13.525 | 99.99th=[ 988] 00:14:13.525 bw ( KiB/s): min=10536, max=12464, per=93.27%, avg=11332.14, stdev=762.49, samples=7 00:14:13.525 iops : min= 2634, max= 3116, avg=2833.00, stdev=190.64, samples=7 00:14:13.525 lat (usec) : 250=0.21%, 500=99.53%, 750=0.13%, 1000=0.10% 00:14:13.525 lat (msec) : 2=0.01% 00:14:13.525 cpu : usr=1.98%, sys=4.27%, ctx=10540, majf=0, minf=1 00:14:13.525 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.525 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.525 issued rwts: total=10533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.525 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.525 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2296488: Mon Jul 15 10:27:07 2024 00:14:13.525 read: IOPS=33, BW=131KiB/s (134kB/s)(420KiB/3207msec) 00:14:13.525 slat (usec): min=5, max=10888, avg=119.43, stdev=1055.99 00:14:13.525 clat (usec): min=335, max=41997, avg=30207.33, stdev=18079.38 00:14:13.525 lat (usec): min=341, max=51950, avg=30327.75, stdev=18178.58 00:14:13.525 clat percentiles (usec): 00:14:13.525 | 1.00th=[ 338], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 375], 00:14:13.525 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:13.525 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:14:13.525 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:13.525 | 99.99th=[42206] 00:14:13.525 bw ( KiB/s): 
min= 96, max= 304, per=1.09%, avg=133.33, stdev=83.70, samples=6 00:14:13.525 iops : min= 24, max= 76, avg=33.33, stdev=20.93, samples=6 00:14:13.525 lat (usec) : 500=24.53%, 750=1.89% 00:14:13.525 lat (msec) : 50=72.64% 00:14:13.525 cpu : usr=0.09%, sys=0.00%, ctx=107, majf=0, minf=1 00:14:13.525 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.525 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.525 issued rwts: total=106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.525 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.525 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2296489: Mon Jul 15 10:27:07 2024 00:14:13.525 read: IOPS=25, BW=102KiB/s (105kB/s)(300KiB/2936msec) 00:14:13.525 slat (nsec): min=11832, max=36112, avg=22677.68, stdev=9336.13 00:14:13.525 clat (usec): min=396, max=41388, avg=38822.86, stdev=9162.05 00:14:13.525 lat (usec): min=416, max=41400, avg=38845.65, stdev=9160.11 00:14:13.525 clat percentiles (usec): 00:14:13.525 | 1.00th=[ 396], 5.00th=[ 545], 10.00th=[41157], 20.00th=[41157], 00:14:13.525 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:13.525 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:13.525 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:13.525 | 99.99th=[41157] 00:14:13.525 bw ( KiB/s): min= 96, max= 120, per=0.84%, avg=102.40, stdev=10.43, samples=5 00:14:13.525 iops : min= 24, max= 30, avg=25.60, stdev= 2.61, samples=5 00:14:13.525 lat (usec) : 500=2.63%, 750=2.63% 00:14:13.525 lat (msec) : 50=93.42% 00:14:13.525 cpu : usr=0.10%, sys=0.00%, ctx=76, majf=0, minf=1 00:14:13.525 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.525 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.525 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.525 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.525 00:14:13.525 Run status group 0 (all jobs): 00:14:13.525 READ: bw=11.9MiB/s (12.4MB/s), 102KiB/s-11.0MiB/s (105kB/s-11.5MB/s), io=44.4MiB (46.6MB), run=2936-3744msec 00:14:13.525 00:14:13.525 Disk stats (read/write): 00:14:13.525 nvme0n1: ios=433/0, merge=0/0, ticks=3331/0, in_queue=3331, util=95.54% 00:14:13.525 nvme0n2: ios=10192/0, merge=0/0, ticks=3265/0, in_queue=3265, util=94.37% 00:14:13.525 nvme0n3: ios=103/0, merge=0/0, ticks=3092/0, in_queue=3092, util=96.48% 00:14:13.525 nvme0n4: ios=73/0, merge=0/0, ticks=2832/0, in_queue=2832, util=96.71% 00:14:13.525 10:27:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:13.525 10:27:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:13.782 10:27:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:13.782 10:27:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:14.039 10:27:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:14:14.039 10:27:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:14.602 10:27:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:14.602 10:27:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:14.602 10:27:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:14.602 10:27:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2296391 00:14:14.602 10:27:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:14.602 10:27:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.859 10:27:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:14.859 10:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:14:14.859 10:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:14.859 10:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.859 10:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:14.859 10:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.859 10:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:14:14.859 10:27:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:14.859 10:27:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:14.859 nvmf hotplug test: fio failed as expected 00:14:14.859 10:27:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:15.117 rmmod nvme_tcp 00:14:15.117 rmmod nvme_fabrics 00:14:15.117 rmmod nvme_keyring 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:14:15.117 
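The four read jobs above all end with err=121 (Remote I/O error), and that is the point of this test: while fio was still issuing reads, the script deleted the concat, raid and malloc bdevs backing the namespaces over RPC, so the target began failing I/O and the harness prints "nvmf hotplug test: fio failed as expected" (fio_status=4) before disconnecting the initiator and unloading the kernel nvme modules. A minimal sketch of that hotplug pattern in plain shell, assuming rpc.py is on PATH; the bdev and device names here are illustrative, and the real script drives fio through its fio-wrapper helper instead:

    # start a long-running read job against the exported namespace
    fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4096 \
        --ioengine=libaio --iodepth=1 --time_based --runtime=10 &
    FIO_PID=$!
    sleep 3
    # delete the backing bdev while I/O is in flight
    rpc.py bdev_malloc_delete Malloc0
    # fio now exits non-zero with Remote I/O errors; that is the pass condition
    wait $FIO_PID || echo 'nvmf hotplug test: fio failed as expected'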
10:27:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2294252 ']' 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2294252 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2294252 ']' 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2294252 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2294252 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2294252' 00:14:15.117 killing process with pid 2294252 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2294252 00:14:15.117 10:27:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2294252 00:14:15.375 10:27:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:15.375 10:27:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:15.375 10:27:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:15.375 10:27:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.375 10:27:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:15.375 10:27:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.375 10:27:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.375 10:27:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.965 10:27:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:17.965 00:14:17.965 real 0m23.508s 00:14:17.965 user 1m22.826s 00:14:17.965 sys 0m6.193s 00:14:17.965 10:27:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:17.965 10:27:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.965 ************************************ 00:14:17.965 END TEST nvmf_fio_target 00:14:17.965 ************************************ 00:14:17.965 10:27:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:17.965 10:27:12 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:17.965 10:27:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:17.965 10:27:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:17.965 10:27:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:17.965 ************************************ 00:14:17.965 START TEST nvmf_bdevio 00:14:17.965 ************************************ 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:17.965 * Looking for test storage... 
00:14:17.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:14:17.965 10:27:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:19.888 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.888 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:19.888 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:19.889 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:19.889 
Found net devices under 0000:0a:00.1: cvl_0_1 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:19.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:14:19.889 00:14:19.889 --- 10.0.0.2 ping statistics --- 00:14:19.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.889 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:19.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:19.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:14:19.889 00:14:19.889 --- 10.0.0.1 ping statistics --- 00:14:19.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.889 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2299620 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2299620 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2299620 ']' 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.889 10:27:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:19.889 [2024-07-15 10:27:14.213975] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:19.889 [2024-07-15 10:27:14.214065] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.889 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.889 [2024-07-15 10:27:14.282474] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:19.889 [2024-07-15 10:27:14.399827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.889 [2024-07-15 10:27:14.399896] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:19.889 [2024-07-15 10:27:14.399914] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.889 [2024-07-15 10:27:14.399928] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.889 [2024-07-15 10:27:14.399939] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.889 [2024-07-15 10:27:14.400020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:19.889 [2024-07-15 10:27:14.400076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:19.889 [2024-07-15 10:27:14.400354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:19.889 [2024-07-15 10:27:14.400359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:20.824 [2024-07-15 10:27:15.229945] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:20.824 Malloc0 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
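Condensed, the rpc_cmd calls traced above are what stand up the target for bdevio: create the TCP transport, back a namespace with a 64 MiB malloc bdev (512-byte blocks, hence the 131072-block Nvme1n1 reported further down), and expose it through subsystem nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420; the listen notice that follows confirms the last step. The same sequence through the plain RPC client would look like this (rpc.py path shortened, flags copied from the trace; -a allows any host NQN to connect):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then attaches to that listener as an initiator using the generated JSON config shown below and runs its block-device suite against Nvme1n1.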
00:14:20.824 [2024-07-15 10:27:15.281504] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:20.824 { 00:14:20.824 "params": { 00:14:20.824 "name": "Nvme$subsystem", 00:14:20.824 "trtype": "$TEST_TRANSPORT", 00:14:20.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:20.824 "adrfam": "ipv4", 00:14:20.824 "trsvcid": "$NVMF_PORT", 00:14:20.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:20.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:20.824 "hdgst": ${hdgst:-false}, 00:14:20.824 "ddgst": ${ddgst:-false} 00:14:20.824 }, 00:14:20.824 "method": "bdev_nvme_attach_controller" 00:14:20.824 } 00:14:20.824 EOF 00:14:20.824 )") 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:20.824 10:27:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:20.824 "params": { 00:14:20.824 "name": "Nvme1", 00:14:20.824 "trtype": "tcp", 00:14:20.824 "traddr": "10.0.0.2", 00:14:20.824 "adrfam": "ipv4", 00:14:20.824 "trsvcid": "4420", 00:14:20.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:20.824 "hdgst": false, 00:14:20.824 "ddgst": false 00:14:20.824 }, 00:14:20.824 "method": "bdev_nvme_attach_controller" 00:14:20.824 }' 00:14:20.824 [2024-07-15 10:27:15.327422] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:20.824 [2024-07-15 10:27:15.327502] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2299777 ] 00:14:20.824 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.824 [2024-07-15 10:27:15.388567] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:21.082 [2024-07-15 10:27:15.500780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.082 [2024-07-15 10:27:15.500830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.082 [2024-07-15 10:27:15.500833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.082 I/O targets: 00:14:21.082 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:21.082 00:14:21.082 00:14:21.082 CUnit - A unit testing framework for C - Version 2.1-3 00:14:21.082 http://cunit.sourceforge.net/ 00:14:21.082 00:14:21.082 00:14:21.082 Suite: bdevio tests on: Nvme1n1 00:14:21.340 Test: blockdev write read block ...passed 00:14:21.340 Test: blockdev write zeroes read block ...passed 00:14:21.340 Test: blockdev write zeroes read no split ...passed 00:14:21.340 Test: blockdev write zeroes read split ...passed 00:14:21.340 Test: blockdev write zeroes read split partial ...passed 00:14:21.340 Test: blockdev reset ...[2024-07-15 10:27:15.881667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:21.340 [2024-07-15 10:27:15.881794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3b580 (9): Bad file descriptor 00:14:21.340 [2024-07-15 10:27:15.893434] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:21.340 passed 00:14:21.340 Test: blockdev write read 8 blocks ...passed 00:14:21.340 Test: blockdev write read size > 128k ...passed 00:14:21.340 Test: blockdev write read invalid size ...passed 00:14:21.340 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:21.340 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:21.340 Test: blockdev write read max offset ...passed 00:14:21.599 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:21.599 Test: blockdev writev readv 8 blocks ...passed 00:14:21.599 Test: blockdev writev readv 30 x 1block ...passed 00:14:21.599 Test: blockdev writev readv block ...passed 00:14:21.599 Test: blockdev writev readv size > 128k ...passed 00:14:21.599 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:21.599 Test: blockdev comparev and writev ...[2024-07-15 10:27:16.110800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.599 [2024-07-15 10:27:16.110836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:21.599 [2024-07-15 10:27:16.110860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.599 [2024-07-15 10:27:16.110886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:21.599 [2024-07-15 10:27:16.111260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.599 [2024-07-15 10:27:16.111285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:21.599 [2024-07-15 10:27:16.111307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.599 [2024-07-15 10:27:16.111323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:21.599 [2024-07-15 10:27:16.111682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.599 [2024-07-15 10:27:16.111706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:21.599 [2024-07-15 10:27:16.111726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.599 [2024-07-15 10:27:16.111742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:21.599 [2024-07-15 10:27:16.112107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.599 [2024-07-15 10:27:16.112131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:21.599 [2024-07-15 10:27:16.112152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.599 [2024-07-15 10:27:16.112167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:21.599 passed 00:14:21.599 Test: blockdev nvme passthru rw ...passed 00:14:21.599 Test: blockdev nvme passthru vendor specific ...[2024-07-15 10:27:16.195265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.599 [2024-07-15 10:27:16.195291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:21.599 [2024-07-15 10:27:16.195477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.599 [2024-07-15 10:27:16.195506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:21.599 [2024-07-15 10:27:16.195688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.599 [2024-07-15 10:27:16.195712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:21.599 [2024-07-15 10:27:16.195892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.599 [2024-07-15 10:27:16.195916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:21.599 passed 00:14:21.599 Test: blockdev nvme admin passthru ...passed 00:14:21.858 Test: blockdev copy ...passed 00:14:21.858 00:14:21.858 Run Summary: Type Total Ran Passed Failed Inactive 00:14:21.858 suites 1 1 n/a 0 0 00:14:21.858 tests 23 23 23 0 0 00:14:21.858 asserts 152 152 152 0 n/a 00:14:21.858 00:14:21.858 Elapsed time = 1.079 seconds 00:14:21.858 10:27:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.858 10:27:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.858 10:27:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:21.858 10:27:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.858 10:27:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:21.858 10:27:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:21.858 10:27:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:21.858 10:27:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:22.116 rmmod nvme_tcp 00:14:22.116 rmmod nvme_fabrics 00:14:22.116 rmmod nvme_keyring 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2299620 ']' 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2299620 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
2299620 ']' 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2299620 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2299620 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2299620' 00:14:22.116 killing process with pid 2299620 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2299620 00:14:22.116 10:27:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2299620 00:14:22.374 10:27:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:22.374 10:27:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:22.374 10:27:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:22.374 10:27:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:22.374 10:27:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:22.374 10:27:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.374 10:27:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.374 10:27:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.280 10:27:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:24.280 00:14:24.280 real 0m6.815s 00:14:24.280 user 0m12.831s 00:14:24.280 sys 0m1.960s 00:14:24.280 10:27:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:24.280 10:27:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:24.280 ************************************ 00:14:24.280 END TEST nvmf_bdevio 00:14:24.280 ************************************ 00:14:24.538 10:27:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:24.538 10:27:18 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:24.538 10:27:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:24.538 10:27:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:24.538 10:27:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:24.538 ************************************ 00:14:24.538 START TEST nvmf_auth_target 00:14:24.538 ************************************ 00:14:24.538 10:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:24.538 * Looking for test storage... 
00:14:24.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.538 10:27:19 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:24.539 10:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.441 10:27:20 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:26.441 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:26.441 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:14:26.441 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:26.441 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:26.442 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target 
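The NIC discovery that produced the two "Found net devices" lines above reduces to globbing each supported PCI function's net/ directory in sysfs. A minimal re-creation; the PCI addresses come from the trace, while the loop shape is an assumption:

  for pci in 0000:0a:00.0 0000:0a:00.1; do        # the two ice (0x8086:0x159b) ports found above
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
      [[ -e "$netdir" ]] && echo "Found net devices under $pci: ${netdir##*/}"
    done
  done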
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:26.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:14:26.442 00:14:26.442 --- 10.0.0.2 ping statistics --- 00:14:26.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.442 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:26.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:14:26.442 00:14:26.442 --- 10.0.0.1 ping statistics --- 00:14:26.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.442 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:26.442 10:27:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:26.442 10:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:26.442 10:27:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:26.442 10:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:26.442 10:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.442 10:27:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2301834 00:14:26.442 10:27:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2301834 00:14:26.442 10:27:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:26.442 10:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2301834 ']' 00:14:26.442 10:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.442 10:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.442 10:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
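Stripped of the xtrace noise, the namespace plumbing above builds a two-endpoint NVMe/TCP topology on a single box: the target keeps cvl_0_0 (10.0.0.2) inside a private namespace, the initiator keeps cvl_0_1 (10.0.0.1) in the root namespace, and the two pings prove the link in both directions before any NVMe traffic flows. The traced commands, replayed in order:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator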
00:14:26.442 10:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.442 10:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2301992 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=66409a32fd43827ef30a12aa6fb273a99bd3ac2a1aa0500f 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.nFc 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 66409a32fd43827ef30a12aa6fb273a99bd3ac2a1aa0500f 0 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 66409a32fd43827ef30a12aa6fb273a99bd3ac2a1aa0500f 0 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=66409a32fd43827ef30a12aa6fb273a99bd3ac2a1aa0500f 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.nFc 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.nFc 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.nFc 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1b48b41f98cc98634d042b45ec68d3b1b48b40362048a7351ddfc74784ef5350 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.nzF 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1b48b41f98cc98634d042b45ec68d3b1b48b40362048a7351ddfc74784ef5350 3 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1b48b41f98cc98634d042b45ec68d3b1b48b40362048a7351ddfc74784ef5350 3 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1b48b41f98cc98634d042b45ec68d3b1b48b40362048a7351ddfc74784ef5350 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.nzF 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.nzF 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.nzF 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7301e4c69ca7ae88ad6df474ba92d562 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.2yN 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7301e4c69ca7ae88ad6df474ba92d562 1 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7301e4c69ca7ae88ad6df474ba92d562 1 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=7301e4c69ca7ae88ad6df474ba92d562 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.2yN 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.2yN 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.2yN 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:27.817 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=69727a311dcbcde3dfcbff418c5647c840214d326bf1990f 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Bcl 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 69727a311dcbcde3dfcbff418c5647c840214d326bf1990f 2 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 69727a311dcbcde3dfcbff418c5647c840214d326bf1990f 2 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=69727a311dcbcde3dfcbff418c5647c840214d326bf1990f 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Bcl 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Bcl 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Bcl 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c18e0b1ac4253294d6b5cc683cb90bfccc08e5605095e774 00:14:27.818 
10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Ui1 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c18e0b1ac4253294d6b5cc683cb90bfccc08e5605095e774 2 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c18e0b1ac4253294d6b5cc683cb90bfccc08e5605095e774 2 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c18e0b1ac4253294d6b5cc683cb90bfccc08e5605095e774 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Ui1 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Ui1 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Ui1 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=af2e7c20d171c952d218d147a5613df5 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.vm3 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key af2e7c20d171c952d218d147a5613df5 1 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 af2e7c20d171c952d218d147a5613df5 1 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=af2e7c20d171c952d218d147a5613df5 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.vm3 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.vm3 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.vm3 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dedb622280e65f1eb1f40cc585f429d30e54832c67cae04c25594d6bdb11276c 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.SIb 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dedb622280e65f1eb1f40cc585f429d30e54832c67cae04c25594d6bdb11276c 3 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dedb622280e65f1eb1f40cc585f429d30e54832c67cae04c25594d6bdb11276c 3 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dedb622280e65f1eb1f40cc585f429d30e54832c67cae04c25594d6bdb11276c 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.SIb 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.SIb 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.SIb 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2301834 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2301834 ']' 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
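Every secret generated above follows one recipe: draw len/2 random bytes, render them as a lowercase hex string, append that string's CRC32 as a 4-byte little-endian trailer, base64 the result, and wrap it as DHHC-1:<id>:<base64>: where the id is 00/01/02/03 for null/sha256/sha384/sha512. A self-contained sketch of that recipe, collapsing the gen_dhchap_key/format_dhchap_key/format_key chain traced above into one function, so treat the name and exact internals as assumptions:

  gen_dhchap_secret() {                              # usage: gen_dhchap_secret <hex-len> <digest-id>
    local key
    key=$(xxd -p -c0 -l $(($1 / 2)) /dev/urandom)    # e.g. 48 hex chars from 24 random bytes
    python3 -c 'import base64, sys, zlib
  k = sys.argv[1].encode()
  crc = zlib.crc32(k).to_bytes(4, "little")
  print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + crc).decode()))' "$key" "$2"
  }
  gen_dhchap_secret 48 0   # -> DHHC-1:00:...:  the shape of key0 as it reappears in nvme connect below

Note that the ASCII hex string itself, not its decoded bytes, is the key material: base64-decoding the key0 secret used later in this log yields the literal hex string 66409a... plus its 4-byte CRC trailer.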
00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:27.818 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.383 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.383 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:28.383 10:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2301992 /var/tmp/host.sock 00:14:28.383 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2301992 ']' 00:14:28.383 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:28.383 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.383 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:28.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:28.383 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.383 10:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.383 10:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.383 10:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:28.383 10:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:28.383 10:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.383 10:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.641 10:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.641 10:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:28.641 10:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nFc 00:14:28.641 10:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.641 10:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.641 10:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.641 10:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.nFc 00:14:28.641 10:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.nFc 00:14:28.899 10:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.nzF ]] 00:14:28.899 10:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nzF 00:14:28.899 10:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.899 10:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.899 10:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.899 10:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nzF 00:14:28.899 10:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nzF 00:14:29.156 10:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:29.156 10:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.2yN 00:14:29.156 10:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.156 10:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 10:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.156 10:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.2yN 00:14:29.156 10:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.2yN 00:14:29.414 10:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Bcl ]] 00:14:29.414 10:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bcl 00:14:29.414 10:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.414 10:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.414 10:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.414 10:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bcl 00:14:29.414 10:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bcl 00:14:29.672 10:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:29.672 10:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Ui1 00:14:29.672 10:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.672 10:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.672 10:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.672 10:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Ui1 00:14:29.672 10:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Ui1 00:14:29.930 10:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.vm3 ]] 00:14:29.930 10:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vm3 00:14:29.930 10:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.930 10:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.930 10:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.930 10:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vm3 00:14:29.930 10:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.vm3 00:14:30.188 10:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:30.188 10:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.SIb 00:14:30.188 10:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.188 10:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.188 10:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.188 10:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.SIb 00:14:30.188 10:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.SIb 00:14:30.447 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:30.447 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:30.447 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:30.447 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.447 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:30.447 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:30.705 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:30.705 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.705 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:30.705 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:30.705 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:30.705 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.705 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.705 10:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.705 10:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.705 10:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.705 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.705 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.963 00:14:30.963 10:27:25 nvmf_tcp.nvmf_auth_target -- 
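That completes one full round for key slot 0, and the same sequence repeats below for slots 1-3: register the key files in both keyrings, pin the host driver to one digest/dhgroup combination, allow the host NQN on the subsystem with the key pair, then attach, which is where the DH-HMAC-CHAP exchange actually runs. Condensed from the traced commands, with the rpc.py path shortened to a placeholder and rpc_cmd/hostrpc being the suite's target-side and host-side RPC wrappers:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  hostrpc() { /path/to/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }   # full path shortened here
  rpc_cmd keyring_file_add_key key0  /tmp/spdk.key-null.nFc                # target-side keyring
  rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nzF
  hostrpc keyring_file_add_key key0  /tmp/spdk.key-null.nFc                # host-side keyring
  hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nzF
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0                             # target: allow this host
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0                             # host: connect + authenticate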
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.963 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.963 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.221 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.221 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.221 10:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.221 10:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.221 10:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.221 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:31.221 { 00:14:31.221 "cntlid": 1, 00:14:31.221 "qid": 0, 00:14:31.221 "state": "enabled", 00:14:31.221 "thread": "nvmf_tgt_poll_group_000", 00:14:31.221 "listen_address": { 00:14:31.221 "trtype": "TCP", 00:14:31.221 "adrfam": "IPv4", 00:14:31.221 "traddr": "10.0.0.2", 00:14:31.221 "trsvcid": "4420" 00:14:31.221 }, 00:14:31.221 "peer_address": { 00:14:31.221 "trtype": "TCP", 00:14:31.221 "adrfam": "IPv4", 00:14:31.221 "traddr": "10.0.0.1", 00:14:31.221 "trsvcid": "35490" 00:14:31.221 }, 00:14:31.221 "auth": { 00:14:31.221 "state": "completed", 00:14:31.221 "digest": "sha256", 00:14:31.221 "dhgroup": "null" 00:14:31.221 } 00:14:31.221 } 00:14:31.221 ]' 00:14:31.221 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:31.479 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.479 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:31.479 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:31.479 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.479 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.479 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.479 10:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.738 10:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:14:32.676 10:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.676 10:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:32.676 10:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.676 10:27:27 
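Two checks close the loop after each attach. First, both sides are probed: bdev_nvme_get_controllers must report the controller by name, and the subsystem's live qpair must show the negotiated digest, dhgroup, and an auth state of "completed" (field paths as in the JSON printed above):

  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'            # -> nvme0
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  jq -r '.[0].auth.digest'  <<< "$qpairs"                         # sha256
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"                         # null
  jq -r '.[0].auth.state'   <<< "$qpairs"                         # completed = handshake succeeded

Second, the kernel initiator runs the same handshake through nvme-cli, passing the formatted secrets directly (flags exactly as traced above; the angle-bracket placeholders stand for the full DHHC-1 strings printed in the log):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:00:<key0 secret>' \
    --dhchap-ctrl-secret 'DHHC-1:03:<ckey0 secret>'               # ctrl secret makes auth bidirectional
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0                   # "disconnected 1 controller(s)"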
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.676 10:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.676 10:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:32.676 10:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:32.676 10:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:32.974 10:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:32.974 10:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.974 10:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:32.974 10:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:32.974 10:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:32.974 10:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.974 10:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.974 10:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.974 10:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.974 10:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.974 10:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.974 10:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.234 00:14:33.234 10:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:33.234 10:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.234 10:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.493 10:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.493 10:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.493 10:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.493 10:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.493 10:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.751 10:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:33.751 { 00:14:33.751 "cntlid": 3, 00:14:33.751 "qid": 0, 00:14:33.751 
"state": "enabled", 00:14:33.751 "thread": "nvmf_tgt_poll_group_000", 00:14:33.751 "listen_address": { 00:14:33.751 "trtype": "TCP", 00:14:33.751 "adrfam": "IPv4", 00:14:33.751 "traddr": "10.0.0.2", 00:14:33.751 "trsvcid": "4420" 00:14:33.751 }, 00:14:33.751 "peer_address": { 00:14:33.751 "trtype": "TCP", 00:14:33.751 "adrfam": "IPv4", 00:14:33.751 "traddr": "10.0.0.1", 00:14:33.751 "trsvcid": "35522" 00:14:33.751 }, 00:14:33.751 "auth": { 00:14:33.751 "state": "completed", 00:14:33.751 "digest": "sha256", 00:14:33.751 "dhgroup": "null" 00:14:33.751 } 00:14:33.751 } 00:14:33.751 ]' 00:14:33.751 10:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.751 10:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.751 10:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.751 10:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:33.751 10:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.751 10:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.751 10:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.751 10:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.058 10:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:14:34.995 10:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.995 10:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.995 10:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.995 10:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.995 10:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.995 10:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.995 10:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:34.995 10:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:35.254 10:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:35.254 10:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:35.254 10:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:35.254 10:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:35.254 10:27:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:35.254 10:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.254 10:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.254 10:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.254 10:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.254 10:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.254 10:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.254 10:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.512 00:14:35.512 10:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.512 10:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.512 10:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.770 10:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.770 10:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.770 10:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.770 10:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.770 10:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.770 10:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.770 { 00:14:35.770 "cntlid": 5, 00:14:35.770 "qid": 0, 00:14:35.770 "state": "enabled", 00:14:35.770 "thread": "nvmf_tgt_poll_group_000", 00:14:35.770 "listen_address": { 00:14:35.770 "trtype": "TCP", 00:14:35.770 "adrfam": "IPv4", 00:14:35.770 "traddr": "10.0.0.2", 00:14:35.770 "trsvcid": "4420" 00:14:35.770 }, 00:14:35.770 "peer_address": { 00:14:35.770 "trtype": "TCP", 00:14:35.770 "adrfam": "IPv4", 00:14:35.770 "traddr": "10.0.0.1", 00:14:35.770 "trsvcid": "35548" 00:14:35.770 }, 00:14:35.770 "auth": { 00:14:35.770 "state": "completed", 00:14:35.770 "digest": "sha256", 00:14:35.770 "dhgroup": "null" 00:14:35.770 } 00:14:35.770 } 00:14:35.770 ]' 00:14:35.770 10:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.770 10:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.770 10:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.029 10:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:36.029 10:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:14:36.029 10:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.029 10:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.029 10:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.287 10:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:14:37.224 10:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.224 10:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:37.224 10:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.224 10:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.224 10:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.224 10:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.224 10:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:37.224 10:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:37.481 10:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:37.481 10:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.481 10:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:37.481 10:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:37.481 10:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:37.481 10:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.481 10:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:37.481 10:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.481 10:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.481 10:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.481 10:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:37.481 10:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:37.738 00:14:37.738 10:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.738 10:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.738 10:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.997 10:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.997 10:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.997 10:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.997 10:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.997 10:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.997 10:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.997 { 00:14:37.997 "cntlid": 7, 00:14:37.997 "qid": 0, 00:14:37.997 "state": "enabled", 00:14:37.997 "thread": "nvmf_tgt_poll_group_000", 00:14:37.997 "listen_address": { 00:14:37.997 "trtype": "TCP", 00:14:37.997 "adrfam": "IPv4", 00:14:37.997 "traddr": "10.0.0.2", 00:14:37.997 "trsvcid": "4420" 00:14:37.997 }, 00:14:37.997 "peer_address": { 00:14:37.997 "trtype": "TCP", 00:14:37.997 "adrfam": "IPv4", 00:14:37.997 "traddr": "10.0.0.1", 00:14:37.997 "trsvcid": "40756" 00:14:37.997 }, 00:14:37.997 "auth": { 00:14:37.997 "state": "completed", 00:14:37.997 "digest": "sha256", 00:14:37.997 "dhgroup": "null" 00:14:37.997 } 00:14:37.997 } 00:14:37.997 ]' 00:14:37.997 10:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.997 10:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.997 10:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.997 10:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:37.997 10:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.256 10:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.256 10:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.256 10:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.515 10:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:14:39.471 10:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.471 10:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:39.471 10:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.471 10:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.471 10:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.471 10:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:39.471 10:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.471 10:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:39.471 10:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:39.729 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:39.729 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.729 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:39.729 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:39.729 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:39.729 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.729 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.729 10:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.729 10:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.729 10:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.729 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.729 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.986 00:14:39.986 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.986 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.986 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.244 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.244 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.244 10:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:40.244 10:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.244 10:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.244 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.244 { 00:14:40.244 "cntlid": 9, 00:14:40.244 "qid": 0, 00:14:40.244 "state": "enabled", 00:14:40.244 "thread": "nvmf_tgt_poll_group_000", 00:14:40.244 "listen_address": { 00:14:40.244 "trtype": "TCP", 00:14:40.244 "adrfam": "IPv4", 00:14:40.244 "traddr": "10.0.0.2", 00:14:40.244 "trsvcid": "4420" 00:14:40.244 }, 00:14:40.244 "peer_address": { 00:14:40.244 "trtype": "TCP", 00:14:40.244 "adrfam": "IPv4", 00:14:40.244 "traddr": "10.0.0.1", 00:14:40.244 "trsvcid": "40774" 00:14:40.244 }, 00:14:40.244 "auth": { 00:14:40.244 "state": "completed", 00:14:40.244 "digest": "sha256", 00:14:40.244 "dhgroup": "ffdhe2048" 00:14:40.244 } 00:14:40.244 } 00:14:40.244 ]' 00:14:40.244 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.244 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.244 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.244 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:40.244 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.244 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.244 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.244 10:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.504 10:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:14:41.442 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.442 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.442 10:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.442 10:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.442 10:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.442 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.442 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:41.442 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:41.700 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:41.700 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.700 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:41.700 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:41.700 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:41.700 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.700 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.700 10:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.700 10:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.958 10:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.958 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.958 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.217 00:14:42.217 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.217 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.217 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.474 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.474 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.474 10:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.474 10:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.474 10:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.474 10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.474 { 00:14:42.474 "cntlid": 11, 00:14:42.474 "qid": 0, 00:14:42.474 "state": "enabled", 00:14:42.474 "thread": "nvmf_tgt_poll_group_000", 00:14:42.474 "listen_address": { 00:14:42.474 "trtype": "TCP", 00:14:42.474 "adrfam": "IPv4", 00:14:42.474 "traddr": "10.0.0.2", 00:14:42.474 "trsvcid": "4420" 00:14:42.474 }, 00:14:42.474 "peer_address": { 00:14:42.474 "trtype": "TCP", 00:14:42.474 "adrfam": "IPv4", 00:14:42.474 "traddr": "10.0.0.1", 00:14:42.474 "trsvcid": "40800" 00:14:42.474 }, 00:14:42.475 "auth": { 00:14:42.475 "state": "completed", 00:14:42.475 "digest": "sha256", 00:14:42.475 "dhgroup": "ffdhe2048" 00:14:42.475 } 00:14:42.475 } 00:14:42.475 ]' 00:14:42.475 
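The trace above is one pass of connect_authenticate from target/auth.sh: the host-side bdev_nvme layer is restricted to the digest/DH-group pair under test, the host NQN is added to the subsystem with the per-iteration key pair, a controller is attached (which runs the DH-HMAC-CHAP handshake), and the resulting qpair is expected to report auth.state "completed" with the matching digest and dhgroup before everything is torn down again. A minimal stand-alone sketch of the same flow, assuming a target listener on 10.0.0.2:4420 for nqn.2024-03.io.spdk:cnode0, a host RPC server on /var/tmp/host.sock, and keyring entries key1/ckey1 registered earlier in the test (SPDK_ROOT is a stand-in for the workspace path):

  rpc="$SPDK_ROOT/scripts/rpc.py"
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side: only offer the digest/DH-group pair this pass exercises.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # Target side (default RPC socket): allow the host, binding key1 and,
  # for bidirectional authentication, the controller key ckey1.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Attaching the controller performs the DH-HMAC-CHAP handshake.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # The qpair must have authenticated with exactly these parameters.
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -e \
      '.[0].auth | .state == "completed" and .digest == "sha256"
                   and .dhgroup == "ffdhe2048"'

  # Tear down so the next digest/dhgroup/key combination starts clean.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The jq checks that follow in the trace (auth.digest, auth.dhgroup, auth.state) are the expanded form of the single jq -e predicate above.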
10:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.475 10:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.475 10:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.475 10:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:42.475 10:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.475 10:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.475 10:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.475 10:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.733 10:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:14:43.672 10:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.672 10:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:43.672 10:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.672 10:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.672 10:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.672 10:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.672 10:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:43.672 10:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:43.931 10:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:43.931 10:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.931 10:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:43.931 10:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:43.931 10:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:43.931 10:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.931 10:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.931 10:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.931 10:27:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:43.931 10:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.931 10:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.931 10:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.499 00:14:44.499 10:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.500 10:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.500 10:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.500 10:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.500 10:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.500 10:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.500 10:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.500 10:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.500 10:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.500 { 00:14:44.500 "cntlid": 13, 00:14:44.500 "qid": 0, 00:14:44.500 "state": "enabled", 00:14:44.500 "thread": "nvmf_tgt_poll_group_000", 00:14:44.500 "listen_address": { 00:14:44.500 "trtype": "TCP", 00:14:44.500 "adrfam": "IPv4", 00:14:44.500 "traddr": "10.0.0.2", 00:14:44.500 "trsvcid": "4420" 00:14:44.500 }, 00:14:44.500 "peer_address": { 00:14:44.500 "trtype": "TCP", 00:14:44.500 "adrfam": "IPv4", 00:14:44.500 "traddr": "10.0.0.1", 00:14:44.500 "trsvcid": "40830" 00:14:44.500 }, 00:14:44.500 "auth": { 00:14:44.500 "state": "completed", 00:14:44.500 "digest": "sha256", 00:14:44.500 "dhgroup": "ffdhe2048" 00:14:44.500 } 00:14:44.500 } 00:14:44.500 ]' 00:14:44.500 10:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.758 10:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.758 10:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.758 10:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:44.758 10:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.758 10:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.758 10:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.758 10:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.015 10:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:14:45.950 10:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.950 10:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:45.950 10:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.950 10:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.950 10:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.950 10:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.950 10:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:45.950 10:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:46.208 10:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:46.208 10:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:46.208 10:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:46.208 10:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:46.208 10:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:46.208 10:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.208 10:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:46.208 10:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.208 10:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.208 10:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.208 10:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:46.208 10:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:46.466 00:14:46.466 10:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.466 10:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.466 10:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.763 10:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.763 10:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.763 10:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.763 10:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.763 10:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.763 10:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.763 { 00:14:46.763 "cntlid": 15, 00:14:46.763 "qid": 0, 00:14:46.763 "state": "enabled", 00:14:46.763 "thread": "nvmf_tgt_poll_group_000", 00:14:46.763 "listen_address": { 00:14:46.763 "trtype": "TCP", 00:14:46.763 "adrfam": "IPv4", 00:14:46.763 "traddr": "10.0.0.2", 00:14:46.763 "trsvcid": "4420" 00:14:46.763 }, 00:14:46.763 "peer_address": { 00:14:46.763 "trtype": "TCP", 00:14:46.763 "adrfam": "IPv4", 00:14:46.763 "traddr": "10.0.0.1", 00:14:46.763 "trsvcid": "40874" 00:14:46.763 }, 00:14:46.763 "auth": { 00:14:46.763 "state": "completed", 00:14:46.763 "digest": "sha256", 00:14:46.763 "dhgroup": "ffdhe2048" 00:14:46.763 } 00:14:46.763 } 00:14:46.763 ]' 00:14:46.763 10:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.021 10:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.022 10:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.022 10:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:47.022 10:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.022 10:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.022 10:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.022 10:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.279 10:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:14:48.215 10:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.215 10:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.215 10:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.215 10:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.215 10:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.215 10:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:48.215 10:27:42 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.215 10:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:48.215 10:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:48.474 10:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:48.474 10:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.474 10:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:48.474 10:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:48.474 10:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:48.474 10:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.474 10:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.474 10:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.474 10:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.474 10:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.474 10:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.474 10:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.733 00:14:48.733 10:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.733 10:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.733 10:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.991 10:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.991 10:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.991 10:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.991 10:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.991 10:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.991 10:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.991 { 00:14:48.991 "cntlid": 17, 00:14:48.991 "qid": 0, 00:14:48.991 "state": "enabled", 00:14:48.991 "thread": "nvmf_tgt_poll_group_000", 00:14:48.991 "listen_address": { 00:14:48.991 "trtype": "TCP", 00:14:48.991 "adrfam": "IPv4", 00:14:48.991 "traddr": 
"10.0.0.2", 00:14:48.991 "trsvcid": "4420" 00:14:48.991 }, 00:14:48.991 "peer_address": { 00:14:48.991 "trtype": "TCP", 00:14:48.991 "adrfam": "IPv4", 00:14:48.991 "traddr": "10.0.0.1", 00:14:48.991 "trsvcid": "32956" 00:14:48.991 }, 00:14:48.991 "auth": { 00:14:48.991 "state": "completed", 00:14:48.991 "digest": "sha256", 00:14:48.991 "dhgroup": "ffdhe3072" 00:14:48.991 } 00:14:48.991 } 00:14:48.991 ]' 00:14:48.991 10:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.991 10:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.991 10:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.248 10:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:49.248 10:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.248 10:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.248 10:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.248 10:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.506 10:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:14:50.442 10:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.442 10:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:50.442 10:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.442 10:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.442 10:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.442 10:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:50.442 10:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:50.442 10:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:50.700 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:50.700 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.700 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:50.700 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:50.700 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:50.700 10:27:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.700 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.700 10:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.700 10:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.700 10:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.700 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.700 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.958 00:14:50.958 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.958 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.958 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.216 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.216 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.216 10:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.216 10:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.216 10:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.216 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.216 { 00:14:51.216 "cntlid": 19, 00:14:51.216 "qid": 0, 00:14:51.216 "state": "enabled", 00:14:51.216 "thread": "nvmf_tgt_poll_group_000", 00:14:51.216 "listen_address": { 00:14:51.216 "trtype": "TCP", 00:14:51.216 "adrfam": "IPv4", 00:14:51.216 "traddr": "10.0.0.2", 00:14:51.216 "trsvcid": "4420" 00:14:51.216 }, 00:14:51.216 "peer_address": { 00:14:51.216 "trtype": "TCP", 00:14:51.216 "adrfam": "IPv4", 00:14:51.216 "traddr": "10.0.0.1", 00:14:51.216 "trsvcid": "33004" 00:14:51.216 }, 00:14:51.216 "auth": { 00:14:51.216 "state": "completed", 00:14:51.216 "digest": "sha256", 00:14:51.216 "dhgroup": "ffdhe3072" 00:14:51.216 } 00:14:51.216 } 00:14:51.216 ]' 00:14:51.216 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.216 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.216 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.474 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:51.474 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.474 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.474 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.474 10:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.732 10:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:14:52.667 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.667 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.667 10:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.667 10:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.667 10:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.667 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.667 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:52.667 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:52.924 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:52.924 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.924 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:52.924 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:52.924 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:52.924 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.924 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.924 10:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.924 10:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.924 10:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.924 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.924 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.181 00:14:53.181 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.181 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.181 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.438 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.438 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.438 10:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.438 10:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.438 10:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.438 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.438 { 00:14:53.438 "cntlid": 21, 00:14:53.438 "qid": 0, 00:14:53.438 "state": "enabled", 00:14:53.438 "thread": "nvmf_tgt_poll_group_000", 00:14:53.438 "listen_address": { 00:14:53.438 "trtype": "TCP", 00:14:53.438 "adrfam": "IPv4", 00:14:53.438 "traddr": "10.0.0.2", 00:14:53.438 "trsvcid": "4420" 00:14:53.438 }, 00:14:53.438 "peer_address": { 00:14:53.438 "trtype": "TCP", 00:14:53.438 "adrfam": "IPv4", 00:14:53.438 "traddr": "10.0.0.1", 00:14:53.438 "trsvcid": "33042" 00:14:53.438 }, 00:14:53.438 "auth": { 00:14:53.438 "state": "completed", 00:14:53.438 "digest": "sha256", 00:14:53.438 "dhgroup": "ffdhe3072" 00:14:53.438 } 00:14:53.438 } 00:14:53.438 ]' 00:14:53.438 10:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:53.438 10:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.438 10:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.438 10:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:53.438 10:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.696 10:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.697 10:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.697 10:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.954 10:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:14:54.888 10:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
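Each pass also exercises the kernel initiator: after the SPDK host detaches, the same subsystem is connected with nvme-cli, passing the generated DHHC-1 secrets in plaintext, then disconnected and the host entry removed from the subsystem. A condensed sketch of that leg for the key2/ffdhe3072 pass above, reusing the secrets exactly as they appear in the trace (a real deployment would substitute its own generated values):

  hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Kernel connect with the host secret (key2) and controller secret
  # (ckey2); the handshake runs during controller initialization.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==:' \
      --dhchap-ctrl-secret 'DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38:'

  nvme disconnect -n "$subnqn"

  # Finally drop the host from the subsystem's allowed-host list.
  "$SPDK_ROOT/scripts/rpc.py" nvmf_subsystem_remove_host "$subnqn" \
      "nqn.2014-08.org.nvmexpress:uuid:$hostid"

The "disconnected 1 controller(s)" line above is nvme-cli's confirmation of this teardown before the suite moves on to the next key.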
00:14:54.888 10:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.888 10:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.888 10:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.888 10:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.888 10:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.888 10:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:54.888 10:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:55.146 10:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:55.146 10:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.146 10:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:55.146 10:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:55.146 10:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:55.146 10:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.146 10:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:55.146 10:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.146 10:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.146 10:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.146 10:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.146 10:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.404 00:14:55.404 10:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.404 10:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:55.404 10:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.662 10:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.662 10:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.662 10:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.662 10:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:55.662 10:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.662 10:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.662 { 00:14:55.662 "cntlid": 23, 00:14:55.662 "qid": 0, 00:14:55.662 "state": "enabled", 00:14:55.662 "thread": "nvmf_tgt_poll_group_000", 00:14:55.662 "listen_address": { 00:14:55.662 "trtype": "TCP", 00:14:55.662 "adrfam": "IPv4", 00:14:55.662 "traddr": "10.0.0.2", 00:14:55.662 "trsvcid": "4420" 00:14:55.662 }, 00:14:55.662 "peer_address": { 00:14:55.662 "trtype": "TCP", 00:14:55.662 "adrfam": "IPv4", 00:14:55.662 "traddr": "10.0.0.1", 00:14:55.662 "trsvcid": "33070" 00:14:55.662 }, 00:14:55.662 "auth": { 00:14:55.662 "state": "completed", 00:14:55.662 "digest": "sha256", 00:14:55.662 "dhgroup": "ffdhe3072" 00:14:55.662 } 00:14:55.662 } 00:14:55.662 ]' 00:14:55.662 10:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.663 10:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.663 10:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.920 10:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:55.920 10:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.920 10:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.920 10:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.921 10:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.179 10:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:14:57.114 10:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.114 10:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:57.114 10:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.114 10:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.114 10:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.114 10:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.114 10:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.114 10:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:57.114 10:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:57.372 10:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:14:57.372 10:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.372 10:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:57.372 10:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:57.372 10:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:57.372 10:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.372 10:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.372 10:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.372 10:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.372 10:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.372 10:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.372 10:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.629 00:14:57.629 10:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.629 10:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.629 10:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.886 10:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.886 10:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.886 10:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.886 10:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.886 10:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.886 10:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.886 { 00:14:57.886 "cntlid": 25, 00:14:57.886 "qid": 0, 00:14:57.886 "state": "enabled", 00:14:57.886 "thread": "nvmf_tgt_poll_group_000", 00:14:57.886 "listen_address": { 00:14:57.886 "trtype": "TCP", 00:14:57.886 "adrfam": "IPv4", 00:14:57.886 "traddr": "10.0.0.2", 00:14:57.886 "trsvcid": "4420" 00:14:57.886 }, 00:14:57.886 "peer_address": { 00:14:57.886 "trtype": "TCP", 00:14:57.886 "adrfam": "IPv4", 00:14:57.886 "traddr": "10.0.0.1", 00:14:57.886 "trsvcid": "50888" 00:14:57.886 }, 00:14:57.886 "auth": { 00:14:57.886 "state": "completed", 00:14:57.886 "digest": "sha256", 00:14:57.886 "dhgroup": "ffdhe4096" 00:14:57.886 } 00:14:57.886 } 00:14:57.886 ]' 00:14:57.886 10:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.143 10:27:52 
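
The jq probe above is the verification half of connect_authenticate: after every attach, the script asserts on the target's view of the freshly created admin qpair. A minimal standalone sketch of the checks performed at target/auth.sh@44-48 (rpc.py shortened from the absolute path in the trace; assumes jq is installed and both SPDK apps are running):

    # the attach must have produced exactly the controller we asked for
    [[ $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # target-side view of the admin qpair created by that attach
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]     # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]  # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished
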
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.143 10:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.143 10:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:58.143 10:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.143 10:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.143 10:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.143 10:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.400 10:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:14:59.335 10:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.335 10:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:59.335 10:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.335 10:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.335 10:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.335 10:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.335 10:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:59.335 10:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:59.593 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:59.593 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.593 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:59.593 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:59.593 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:59.593 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.593 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.593 10:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.593 10:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.593 10:27:54 
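
Each (digest, dhgroup, key) pass repeats the same round trip that the surrounding records expand piecemeal. Condensed into one place as a sketch, with the literal addresses and NQNs from this run; key1/ckey1 are names of DH-HMAC-CHAP keys registered with the apps earlier in the test, outside this excerpt:

    tgt_rpc()  { scripts/rpc.py "$@"; }                       # target app, default socket
    host_rpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; } # initiator app
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # pin the initiator to the combination under test
    host_rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # allow the host on the target, naming its key(s)
    tgt_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # attach from the initiator; DH-HMAC-CHAP runs during this connect
    host_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # ... verify the qpair as sketched above, then tear down for the next pass
    host_rpc bdev_nvme_detach_controller nvme0
    tgt_rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
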
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.593 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.593 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.164 00:15:00.164 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.164 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:00.164 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.468 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.468 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.468 10:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.468 10:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.468 10:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.468 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.468 { 00:15:00.468 "cntlid": 27, 00:15:00.468 "qid": 0, 00:15:00.468 "state": "enabled", 00:15:00.468 "thread": "nvmf_tgt_poll_group_000", 00:15:00.468 "listen_address": { 00:15:00.468 "trtype": "TCP", 00:15:00.468 "adrfam": "IPv4", 00:15:00.468 "traddr": "10.0.0.2", 00:15:00.468 "trsvcid": "4420" 00:15:00.468 }, 00:15:00.469 "peer_address": { 00:15:00.469 "trtype": "TCP", 00:15:00.469 "adrfam": "IPv4", 00:15:00.469 "traddr": "10.0.0.1", 00:15:00.469 "trsvcid": "50908" 00:15:00.469 }, 00:15:00.469 "auth": { 00:15:00.469 "state": "completed", 00:15:00.469 "digest": "sha256", 00:15:00.469 "dhgroup": "ffdhe4096" 00:15:00.469 } 00:15:00.469 } 00:15:00.469 ]' 00:15:00.469 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.469 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.469 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.469 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:00.469 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.469 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.469 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.469 10:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.725 10:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:15:01.658 10:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.658 10:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:01.658 10:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.658 10:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.658 10:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.658 10:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.658 10:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:01.658 10:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:01.915 10:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:15:01.915 10:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.915 10:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:01.915 10:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:01.915 10:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:01.915 10:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.916 10:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.916 10:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.916 10:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.916 10:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.916 10:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.916 10:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.482 00:15:02.483 10:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.483 10:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.483 10:27:56 
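
The hostrpc indirection at target/auth.sh@31, expanded again in the next record, is the one piece of plumbing worth calling out: the test runs two SPDK applications, the NVMe-oF target on the default RPC socket and a second app acting as the initiator on /var/tmp/host.sock, and hostrpc simply points the RPC client at the latter. As the trace expands it:

    # forward any rpc.py subcommand to the initiator app instead of the target
    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
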
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.740 10:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.740 10:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.740 10:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.740 10:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.740 10:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.740 10:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.740 { 00:15:02.740 "cntlid": 29, 00:15:02.740 "qid": 0, 00:15:02.740 "state": "enabled", 00:15:02.740 "thread": "nvmf_tgt_poll_group_000", 00:15:02.740 "listen_address": { 00:15:02.740 "trtype": "TCP", 00:15:02.740 "adrfam": "IPv4", 00:15:02.740 "traddr": "10.0.0.2", 00:15:02.740 "trsvcid": "4420" 00:15:02.740 }, 00:15:02.740 "peer_address": { 00:15:02.740 "trtype": "TCP", 00:15:02.740 "adrfam": "IPv4", 00:15:02.740 "traddr": "10.0.0.1", 00:15:02.740 "trsvcid": "50940" 00:15:02.740 }, 00:15:02.740 "auth": { 00:15:02.740 "state": "completed", 00:15:02.740 "digest": "sha256", 00:15:02.740 "dhgroup": "ffdhe4096" 00:15:02.740 } 00:15:02.740 } 00:15:02.740 ]' 00:15:02.740 10:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.740 10:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.740 10:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.740 10:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:02.740 10:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.740 10:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.740 10:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.740 10:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.998 10:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:15:03.930 10:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.930 10:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.930 10:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.930 10:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.930 10:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.930 10:27:58 
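
The nvme connect in the record above is the kernel-initiator leg of the same pass: unlike the SPDK attach, nvme-cli is handed the secret values themselves rather than keyring names. For keys 0-2 a controller secret exists, so --dhchap-ctrl-secret is passed as well and the host authenticates the controller back (bidirectional DH-HMAC-CHAP); the key3 passes supply only --dhchap-secret. Shape of the bidirectional call, with the secrets shortened here; the --hostid in the trace is just the UUID portion of the host NQN:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "${hostnqn#*:uuid:}" \
        --dhchap-secret 'DHHC-1:02:YzE4...' \
        --dhchap-ctrl-secret 'DHHC-1:01:YWYy...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
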
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.930 10:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:03.930 10:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:04.188 10:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:15:04.188 10:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.188 10:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:04.188 10:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:04.188 10:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:04.188 10:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.188 10:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:04.188 10:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.188 10:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.188 10:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.188 10:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:04.188 10:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:04.755 00:15:04.755 10:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.755 10:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.755 10:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.013 10:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.013 10:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.013 10:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.013 10:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.013 10:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.013 10:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.013 { 00:15:05.013 "cntlid": 31, 00:15:05.013 "qid": 0, 00:15:05.013 "state": "enabled", 00:15:05.014 "thread": "nvmf_tgt_poll_group_000", 00:15:05.014 "listen_address": { 00:15:05.014 "trtype": "TCP", 00:15:05.014 "adrfam": "IPv4", 00:15:05.014 "traddr": "10.0.0.2", 00:15:05.014 "trsvcid": "4420" 00:15:05.014 }, 
00:15:05.014 "peer_address": { 00:15:05.014 "trtype": "TCP", 00:15:05.014 "adrfam": "IPv4", 00:15:05.014 "traddr": "10.0.0.1", 00:15:05.014 "trsvcid": "50968" 00:15:05.014 }, 00:15:05.014 "auth": { 00:15:05.014 "state": "completed", 00:15:05.014 "digest": "sha256", 00:15:05.014 "dhgroup": "ffdhe4096" 00:15:05.014 } 00:15:05.014 } 00:15:05.014 ]' 00:15:05.014 10:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.014 10:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.014 10:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.014 10:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:05.014 10:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.014 10:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.014 10:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.014 10:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.272 10:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:15:06.649 10:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.649 10:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:06.649 10:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.649 10:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.649 10:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.649 10:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:06.649 10:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.649 10:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:06.649 10:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:06.649 10:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:15:06.649 10:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.649 10:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:06.649 10:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:06.649 10:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:06.649 10:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:15:06.649 10:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.649 10:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.649 10:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.649 10:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.649 10:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.649 10:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.216 00:15:07.216 10:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.216 10:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.216 10:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.473 10:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.473 10:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.473 10:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.473 10:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.473 10:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.473 10:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.473 { 00:15:07.474 "cntlid": 33, 00:15:07.474 "qid": 0, 00:15:07.474 "state": "enabled", 00:15:07.474 "thread": "nvmf_tgt_poll_group_000", 00:15:07.474 "listen_address": { 00:15:07.474 "trtype": "TCP", 00:15:07.474 "adrfam": "IPv4", 00:15:07.474 "traddr": "10.0.0.2", 00:15:07.474 "trsvcid": "4420" 00:15:07.474 }, 00:15:07.474 "peer_address": { 00:15:07.474 "trtype": "TCP", 00:15:07.474 "adrfam": "IPv4", 00:15:07.474 "traddr": "10.0.0.1", 00:15:07.474 "trsvcid": "45532" 00:15:07.474 }, 00:15:07.474 "auth": { 00:15:07.474 "state": "completed", 00:15:07.474 "digest": "sha256", 00:15:07.474 "dhgroup": "ffdhe6144" 00:15:07.474 } 00:15:07.474 } 00:15:07.474 ]' 00:15:07.474 10:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.474 10:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.474 10:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.474 10:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:07.474 10:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.732 10:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.732 10:28:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.732 10:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.992 10:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:15:08.928 10:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.928 10:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.928 10:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.928 10:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.928 10:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.928 10:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.928 10:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:08.928 10:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:09.186 10:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:15:09.186 10:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.186 10:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:09.186 10:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:09.186 10:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:09.186 10:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.186 10:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.186 10:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.186 10:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.186 10:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.186 10:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.186 10:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.751 00:15:09.751 10:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.751 10:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.751 10:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.009 10:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.009 10:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.009 10:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.009 10:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.009 10:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.009 10:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.009 { 00:15:10.009 "cntlid": 35, 00:15:10.009 "qid": 0, 00:15:10.009 "state": "enabled", 00:15:10.009 "thread": "nvmf_tgt_poll_group_000", 00:15:10.009 "listen_address": { 00:15:10.009 "trtype": "TCP", 00:15:10.009 "adrfam": "IPv4", 00:15:10.009 "traddr": "10.0.0.2", 00:15:10.009 "trsvcid": "4420" 00:15:10.009 }, 00:15:10.009 "peer_address": { 00:15:10.009 "trtype": "TCP", 00:15:10.009 "adrfam": "IPv4", 00:15:10.009 "traddr": "10.0.0.1", 00:15:10.009 "trsvcid": "45556" 00:15:10.009 }, 00:15:10.009 "auth": { 00:15:10.009 "state": "completed", 00:15:10.009 "digest": "sha256", 00:15:10.009 "dhgroup": "ffdhe6144" 00:15:10.009 } 00:15:10.009 } 00:15:10.009 ]' 00:15:10.009 10:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.009 10:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.009 10:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.009 10:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:10.009 10:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.009 10:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.009 10:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.009 10:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.269 10:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:15:11.207 10:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.207 10:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:11.207 10:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.207 10:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.207 10:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.207 10:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.207 10:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:11.207 10:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:11.465 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:15:11.465 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.465 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:11.465 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:11.465 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:11.465 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.465 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.465 10:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.465 10:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.465 10:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.465 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.465 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.033 00:15:12.033 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.033 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.033 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.291 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.291 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.291 10:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.291 10:28:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:12.291 10:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.291 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.291 { 00:15:12.291 "cntlid": 37, 00:15:12.291 "qid": 0, 00:15:12.291 "state": "enabled", 00:15:12.291 "thread": "nvmf_tgt_poll_group_000", 00:15:12.291 "listen_address": { 00:15:12.291 "trtype": "TCP", 00:15:12.291 "adrfam": "IPv4", 00:15:12.291 "traddr": "10.0.0.2", 00:15:12.291 "trsvcid": "4420" 00:15:12.291 }, 00:15:12.291 "peer_address": { 00:15:12.291 "trtype": "TCP", 00:15:12.291 "adrfam": "IPv4", 00:15:12.291 "traddr": "10.0.0.1", 00:15:12.291 "trsvcid": "45580" 00:15:12.291 }, 00:15:12.291 "auth": { 00:15:12.291 "state": "completed", 00:15:12.291 "digest": "sha256", 00:15:12.291 "dhgroup": "ffdhe6144" 00:15:12.291 } 00:15:12.291 } 00:15:12.291 ]' 00:15:12.291 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.291 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.291 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.291 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:12.291 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.549 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.549 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.549 10:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.808 10:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:15:13.740 10:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.740 10:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:13.740 10:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.740 10:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.740 10:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.740 10:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.740 10:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:13.740 10:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:13.997 10:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:15:13.997 10:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.997 10:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:13.997 10:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:13.997 10:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:13.997 10:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.997 10:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:13.997 10:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.997 10:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.997 10:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.997 10:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:13.997 10:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:14.642 00:15:14.642 10:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.642 10:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.642 10:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.642 10:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.642 10:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.642 10:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.643 10:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.643 10:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.643 10:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.643 { 00:15:14.643 "cntlid": 39, 00:15:14.643 "qid": 0, 00:15:14.643 "state": "enabled", 00:15:14.643 "thread": "nvmf_tgt_poll_group_000", 00:15:14.643 "listen_address": { 00:15:14.643 "trtype": "TCP", 00:15:14.643 "adrfam": "IPv4", 00:15:14.643 "traddr": "10.0.0.2", 00:15:14.643 "trsvcid": "4420" 00:15:14.643 }, 00:15:14.643 "peer_address": { 00:15:14.643 "trtype": "TCP", 00:15:14.643 "adrfam": "IPv4", 00:15:14.643 "traddr": "10.0.0.1", 00:15:14.643 "trsvcid": "45608" 00:15:14.643 }, 00:15:14.643 "auth": { 00:15:14.643 "state": "completed", 00:15:14.643 "digest": "sha256", 00:15:14.643 "dhgroup": "ffdhe6144" 00:15:14.643 } 00:15:14.643 } 00:15:14.643 ]' 00:15:14.643 10:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.899 10:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.899 10:28:09 
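
The key3 passes, like the one running here, differ from the others in exactly one way, visible at target/auth.sh@37: ckeys[3] is unset, so the ckey array expands to nothing and neither nvmf_subsystem_add_host nor the attach receives a --dhchap-ctrlr-key. The idiom as it appears inside connect_authenticate, where $3 is the key index (subnqn/hostnqn as in the sketches above):

    # empty array when no controller key exists for this index,
    # the pair (--dhchap-ctrlr-key ckeyN) otherwise
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"
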
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.899 10:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:14.899 10:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.899 10:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.899 10:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.899 10:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.155 10:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:15:16.088 10:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.088 10:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.088 10:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.088 10:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.088 10:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.088 10:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:16.088 10:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:16.088 10:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:16.088 10:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:16.346 10:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:15:16.346 10:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.346 10:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:16.346 10:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:16.346 10:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:16.346 10:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.346 10:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.346 10:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.346 10:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.346 10:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.346 10:28:10 
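
All secrets handed to nvme-cli in this run use the TP 8006 textual form DHHC-1:<t>:<base64>:, where <t> should be the transform applied to the key material (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the key followed by a 4-byte CRC-32; that reading is spec knowledge, not something the log itself states. A quick sanity check on the key3 secret from the connect just above (bash plus coreutils base64):

    secret='DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=:'
    b64=${secret#DHHC-1:??:}            # strip the "DHHC-1:<t>:" prefix
    base64 -d <<< "${b64%:}" | wc -c    # prints 68: a 64-byte key plus the CRC-32
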
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.346 10:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.282 00:15:17.282 10:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.282 10:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.282 10:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.539 10:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.539 10:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.539 10:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.539 10:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.539 10:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.539 10:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.539 { 00:15:17.539 "cntlid": 41, 00:15:17.539 "qid": 0, 00:15:17.539 "state": "enabled", 00:15:17.539 "thread": "nvmf_tgt_poll_group_000", 00:15:17.539 "listen_address": { 00:15:17.539 "trtype": "TCP", 00:15:17.539 "adrfam": "IPv4", 00:15:17.539 "traddr": "10.0.0.2", 00:15:17.539 "trsvcid": "4420" 00:15:17.539 }, 00:15:17.539 "peer_address": { 00:15:17.539 "trtype": "TCP", 00:15:17.539 "adrfam": "IPv4", 00:15:17.539 "traddr": "10.0.0.1", 00:15:17.539 "trsvcid": "50316" 00:15:17.539 }, 00:15:17.539 "auth": { 00:15:17.539 "state": "completed", 00:15:17.539 "digest": "sha256", 00:15:17.539 "dhgroup": "ffdhe8192" 00:15:17.539 } 00:15:17.539 } 00:15:17.539 ]' 00:15:17.539 10:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.539 10:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.539 10:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.539 10:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:17.539 10:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.539 10:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.539 10:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.539 10:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.104 10:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:15:19.038 10:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.038 10:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:19.038 10:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.038 10:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.038 10:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.038 10:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.038 10:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:19.038 10:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:19.296 10:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:15:19.296 10:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:19.296 10:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:19.297 10:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:19.297 10:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:19.297 10:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.297 10:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.297 10:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.297 10:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.297 10:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.297 10:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.297 10:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.233 00:15:20.233 10:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.233 10:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.233 10:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.233 10:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.233 10:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.233 10:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.233 10:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.233 10:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.233 10:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.233 { 00:15:20.233 "cntlid": 43, 00:15:20.233 "qid": 0, 00:15:20.233 "state": "enabled", 00:15:20.233 "thread": "nvmf_tgt_poll_group_000", 00:15:20.233 "listen_address": { 00:15:20.233 "trtype": "TCP", 00:15:20.233 "adrfam": "IPv4", 00:15:20.233 "traddr": "10.0.0.2", 00:15:20.233 "trsvcid": "4420" 00:15:20.233 }, 00:15:20.233 "peer_address": { 00:15:20.233 "trtype": "TCP", 00:15:20.233 "adrfam": "IPv4", 00:15:20.233 "traddr": "10.0.0.1", 00:15:20.233 "trsvcid": "50332" 00:15:20.233 }, 00:15:20.233 "auth": { 00:15:20.233 "state": "completed", 00:15:20.233 "digest": "sha256", 00:15:20.233 "dhgroup": "ffdhe8192" 00:15:20.233 } 00:15:20.233 } 00:15:20.233 ]' 00:15:20.233 10:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.491 10:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.491 10:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.491 10:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:20.491 10:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.491 10:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.491 10:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.491 10:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.750 10:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:15:21.719 10:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.719 10:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.719 10:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.719 10:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.719 10:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.719 10:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:15:21.719 10:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:21.720 10:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:21.977 10:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:21.977 10:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:21.978 10:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:21.978 10:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:21.978 10:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:21.978 10:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.978 10:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.978 10:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.978 10:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.978 10:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.978 10:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.978 10:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.915 00:15:22.915 10:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.915 10:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.915 10:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.173 10:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.173 10:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.173 10:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.173 10:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.173 10:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.173 10:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.173 { 00:15:23.173 "cntlid": 45, 00:15:23.173 "qid": 0, 00:15:23.173 "state": "enabled", 00:15:23.173 "thread": "nvmf_tgt_poll_group_000", 00:15:23.173 "listen_address": { 00:15:23.173 "trtype": "TCP", 00:15:23.173 "adrfam": "IPv4", 00:15:23.173 "traddr": "10.0.0.2", 00:15:23.173 "trsvcid": "4420" 
00:15:23.173 }, 00:15:23.173 "peer_address": { 00:15:23.173 "trtype": "TCP", 00:15:23.173 "adrfam": "IPv4", 00:15:23.173 "traddr": "10.0.0.1", 00:15:23.173 "trsvcid": "50358" 00:15:23.173 }, 00:15:23.173 "auth": { 00:15:23.173 "state": "completed", 00:15:23.173 "digest": "sha256", 00:15:23.173 "dhgroup": "ffdhe8192" 00:15:23.173 } 00:15:23.173 } 00:15:23.173 ]' 00:15:23.173 10:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.173 10:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:23.173 10:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.173 10:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:23.173 10:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.173 10:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.173 10:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.173 10:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.433 10:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.809 10:28:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:24.809 10:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.745 00:15:25.745 10:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.745 10:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.745 10:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.003 10:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.003 10:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.003 10:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.003 10:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.003 10:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.003 10:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.003 { 00:15:26.003 "cntlid": 47, 00:15:26.003 "qid": 0, 00:15:26.003 "state": "enabled", 00:15:26.003 "thread": "nvmf_tgt_poll_group_000", 00:15:26.003 "listen_address": { 00:15:26.003 "trtype": "TCP", 00:15:26.003 "adrfam": "IPv4", 00:15:26.003 "traddr": "10.0.0.2", 00:15:26.003 "trsvcid": "4420" 00:15:26.003 }, 00:15:26.003 "peer_address": { 00:15:26.003 "trtype": "TCP", 00:15:26.003 "adrfam": "IPv4", 00:15:26.003 "traddr": "10.0.0.1", 00:15:26.003 "trsvcid": "50370" 00:15:26.003 }, 00:15:26.003 "auth": { 00:15:26.003 "state": "completed", 00:15:26.003 "digest": "sha256", 00:15:26.003 "dhgroup": "ffdhe8192" 00:15:26.003 } 00:15:26.003 } 00:15:26.003 ]' 00:15:26.003 10:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.003 10:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.003 10:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.003 10:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:26.003 10:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.003 10:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.003 10:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.003 
10:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.261 10:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:15:27.198 10:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.198 10:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:27.198 10:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.198 10:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.198 10:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.198 10:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:27.198 10:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:27.198 10:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.198 10:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:27.198 10:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:27.764 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:27.764 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.764 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:27.764 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:27.764 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:27.764 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.764 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.764 10:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.764 10:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.764 10:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.764 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.764 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.023 00:15:28.023 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.023 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.023 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.282 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.282 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.282 10:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.282 10:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.282 10:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.282 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.282 { 00:15:28.282 "cntlid": 49, 00:15:28.282 "qid": 0, 00:15:28.282 "state": "enabled", 00:15:28.282 "thread": "nvmf_tgt_poll_group_000", 00:15:28.282 "listen_address": { 00:15:28.282 "trtype": "TCP", 00:15:28.282 "adrfam": "IPv4", 00:15:28.282 "traddr": "10.0.0.2", 00:15:28.282 "trsvcid": "4420" 00:15:28.282 }, 00:15:28.282 "peer_address": { 00:15:28.282 "trtype": "TCP", 00:15:28.282 "adrfam": "IPv4", 00:15:28.282 "traddr": "10.0.0.1", 00:15:28.282 "trsvcid": "47100" 00:15:28.282 }, 00:15:28.282 "auth": { 00:15:28.282 "state": "completed", 00:15:28.282 "digest": "sha384", 00:15:28.282 "dhgroup": "null" 00:15:28.282 } 00:15:28.282 } 00:15:28.282 ]' 00:15:28.282 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.282 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.282 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.282 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:28.282 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.282 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.282 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.282 10:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.540 10:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:15:29.513 10:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.513 10:28:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:29.513 10:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.513 10:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.513 10:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.513 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.513 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:29.513 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:29.772 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:29.772 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.772 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:29.772 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:29.772 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:29.772 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.772 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.772 10:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.772 10:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.772 10:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.772 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.772 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.029 00:15:30.029 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.029 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.029 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.287 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.287 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.287 10:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.287 10:28:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:30.287 10:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.287 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.288 { 00:15:30.288 "cntlid": 51, 00:15:30.288 "qid": 0, 00:15:30.288 "state": "enabled", 00:15:30.288 "thread": "nvmf_tgt_poll_group_000", 00:15:30.288 "listen_address": { 00:15:30.288 "trtype": "TCP", 00:15:30.288 "adrfam": "IPv4", 00:15:30.288 "traddr": "10.0.0.2", 00:15:30.288 "trsvcid": "4420" 00:15:30.288 }, 00:15:30.288 "peer_address": { 00:15:30.288 "trtype": "TCP", 00:15:30.288 "adrfam": "IPv4", 00:15:30.288 "traddr": "10.0.0.1", 00:15:30.288 "trsvcid": "47128" 00:15:30.288 }, 00:15:30.288 "auth": { 00:15:30.288 "state": "completed", 00:15:30.288 "digest": "sha384", 00:15:30.288 "dhgroup": "null" 00:15:30.288 } 00:15:30.288 } 00:15:30.288 ]' 00:15:30.288 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.288 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.288 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.288 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:30.288 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.545 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.545 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.545 10:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.545 10:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:15:31.481 10:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.739 10:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.739 10:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.739 10:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.739 10:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.739 10:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.739 10:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:31.739 10:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:31.997 10:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:31.997 10:28:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:31.997 10:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:31.997 10:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:31.997 10:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:31.997 10:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.997 10:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.997 10:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.997 10:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.997 10:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.997 10:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.997 10:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.255 00:15:32.255 10:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.255 10:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.255 10:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.513 10:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.513 10:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.514 10:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.514 10:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.514 10:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.514 10:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.514 { 00:15:32.514 "cntlid": 53, 00:15:32.514 "qid": 0, 00:15:32.514 "state": "enabled", 00:15:32.514 "thread": "nvmf_tgt_poll_group_000", 00:15:32.514 "listen_address": { 00:15:32.514 "trtype": "TCP", 00:15:32.514 "adrfam": "IPv4", 00:15:32.514 "traddr": "10.0.0.2", 00:15:32.514 "trsvcid": "4420" 00:15:32.514 }, 00:15:32.514 "peer_address": { 00:15:32.514 "trtype": "TCP", 00:15:32.514 "adrfam": "IPv4", 00:15:32.514 "traddr": "10.0.0.1", 00:15:32.514 "trsvcid": "47168" 00:15:32.514 }, 00:15:32.514 "auth": { 00:15:32.514 "state": "completed", 00:15:32.514 "digest": "sha384", 00:15:32.514 "dhgroup": "null" 00:15:32.514 } 00:15:32.514 } 00:15:32.514 ]' 00:15:32.514 10:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.514 10:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:15:32.514 10:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.514 10:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:32.514 10:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.514 10:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.514 10:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.514 10:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.079 10:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:15:34.040 10:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.040 10:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.040 10:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.040 10:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.040 10:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.040 10:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.041 10:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:34.041 10:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:34.300 10:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:34.300 10:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.300 10:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:34.300 10:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:34.300 10:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:34.300 10:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.300 10:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:34.300 10:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.300 10:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.300 10:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.300 10:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:34.300 10:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:34.558 00:15:34.558 10:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.558 10:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.558 10:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.815 10:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.816 10:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.816 10:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.816 10:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.816 10:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.816 10:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:34.816 { 00:15:34.816 "cntlid": 55, 00:15:34.816 "qid": 0, 00:15:34.816 "state": "enabled", 00:15:34.816 "thread": "nvmf_tgt_poll_group_000", 00:15:34.816 "listen_address": { 00:15:34.816 "trtype": "TCP", 00:15:34.816 "adrfam": "IPv4", 00:15:34.816 "traddr": "10.0.0.2", 00:15:34.816 "trsvcid": "4420" 00:15:34.816 }, 00:15:34.816 "peer_address": { 00:15:34.816 "trtype": "TCP", 00:15:34.816 "adrfam": "IPv4", 00:15:34.816 "traddr": "10.0.0.1", 00:15:34.816 "trsvcid": "47196" 00:15:34.816 }, 00:15:34.816 "auth": { 00:15:34.816 "state": "completed", 00:15:34.816 "digest": "sha384", 00:15:34.816 "dhgroup": "null" 00:15:34.816 } 00:15:34.816 } 00:15:34.816 ]' 00:15:34.816 10:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:34.816 10:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.816 10:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.816 10:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:34.816 10:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.074 10:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.074 10:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.074 10:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.332 10:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:15:36.267 10:28:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.267 10:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.267 10:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.267 10:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.267 10:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.267 10:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.267 10:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.267 10:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:36.267 10:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:36.525 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:36.525 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.525 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:36.526 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:36.526 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:36.526 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.526 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.526 10:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.526 10:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.526 10:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.526 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.526 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.784 00:15:36.784 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.784 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.784 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.042 10:28:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.042 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.042 10:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.042 10:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.042 10:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.042 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.042 { 00:15:37.042 "cntlid": 57, 00:15:37.042 "qid": 0, 00:15:37.042 "state": "enabled", 00:15:37.042 "thread": "nvmf_tgt_poll_group_000", 00:15:37.042 "listen_address": { 00:15:37.042 "trtype": "TCP", 00:15:37.042 "adrfam": "IPv4", 00:15:37.042 "traddr": "10.0.0.2", 00:15:37.042 "trsvcid": "4420" 00:15:37.042 }, 00:15:37.042 "peer_address": { 00:15:37.042 "trtype": "TCP", 00:15:37.042 "adrfam": "IPv4", 00:15:37.042 "traddr": "10.0.0.1", 00:15:37.042 "trsvcid": "49738" 00:15:37.042 }, 00:15:37.042 "auth": { 00:15:37.042 "state": "completed", 00:15:37.042 "digest": "sha384", 00:15:37.042 "dhgroup": "ffdhe2048" 00:15:37.042 } 00:15:37.042 } 00:15:37.042 ]' 00:15:37.042 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.042 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.042 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.298 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:37.298 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.298 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.298 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.298 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.556 10:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:15:38.487 10:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.487 10:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.487 10:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.487 10:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.487 10:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.487 10:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.487 10:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:38.487 10:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:38.743 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:38.743 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.743 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:38.743 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:38.743 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:38.743 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.743 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.743 10:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.743 10:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.743 10:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.743 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.743 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.000 00:15:39.000 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.000 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.000 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.256 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.256 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.256 10:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.256 10:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.256 10:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.256 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.256 { 00:15:39.256 "cntlid": 59, 00:15:39.256 "qid": 0, 00:15:39.256 "state": "enabled", 00:15:39.256 "thread": "nvmf_tgt_poll_group_000", 00:15:39.256 "listen_address": { 00:15:39.256 "trtype": "TCP", 00:15:39.256 "adrfam": "IPv4", 00:15:39.256 "traddr": "10.0.0.2", 00:15:39.256 "trsvcid": "4420" 00:15:39.256 }, 00:15:39.256 "peer_address": { 00:15:39.256 "trtype": "TCP", 00:15:39.256 "adrfam": "IPv4", 00:15:39.256 
"traddr": "10.0.0.1", 00:15:39.256 "trsvcid": "49768" 00:15:39.256 }, 00:15:39.256 "auth": { 00:15:39.256 "state": "completed", 00:15:39.256 "digest": "sha384", 00:15:39.256 "dhgroup": "ffdhe2048" 00:15:39.256 } 00:15:39.256 } 00:15:39.256 ]' 00:15:39.256 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.513 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.513 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.513 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:39.513 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.513 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.513 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.513 10:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.771 10:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:15:40.702 10:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.702 10:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:40.702 10:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.702 10:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.702 10:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.702 10:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.702 10:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:40.702 10:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:41.265 10:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:41.265 10:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.265 10:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:41.265 10:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:41.265 10:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:41.265 10:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.265 10:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.265 10:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.265 10:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.265 10:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.265 10:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.265 10:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.564 00:15:41.564 10:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.564 10:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.564 10:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.564 10:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.564 10:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.564 10:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.564 10:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.838 10:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.838 10:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.838 { 00:15:41.838 "cntlid": 61, 00:15:41.838 "qid": 0, 00:15:41.838 "state": "enabled", 00:15:41.838 "thread": "nvmf_tgt_poll_group_000", 00:15:41.838 "listen_address": { 00:15:41.838 "trtype": "TCP", 00:15:41.838 "adrfam": "IPv4", 00:15:41.838 "traddr": "10.0.0.2", 00:15:41.838 "trsvcid": "4420" 00:15:41.838 }, 00:15:41.838 "peer_address": { 00:15:41.838 "trtype": "TCP", 00:15:41.838 "adrfam": "IPv4", 00:15:41.838 "traddr": "10.0.0.1", 00:15:41.838 "trsvcid": "49798" 00:15:41.838 }, 00:15:41.838 "auth": { 00:15:41.838 "state": "completed", 00:15:41.838 "digest": "sha384", 00:15:41.838 "dhgroup": "ffdhe2048" 00:15:41.838 } 00:15:41.838 } 00:15:41.838 ]' 00:15:41.838 10:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.838 10:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.838 10:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.838 10:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:41.838 10:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.838 10:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.838 10:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.838 10:28:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.097 10:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:15:43.044 10:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.044 10:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:43.044 10:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.044 10:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.044 10:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.044 10:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.044 10:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:43.044 10:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:43.302 10:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:43.302 10:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.302 10:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:43.302 10:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:43.302 10:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:43.302 10:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.302 10:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:43.302 10:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.302 10:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.302 10:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.302 10:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:43.302 10:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:43.559 00:15:43.560 10:28:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.560 10:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.560 10:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.817 10:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.817 10:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.817 10:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.817 10:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.817 10:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.817 10:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.817 { 00:15:43.817 "cntlid": 63, 00:15:43.817 "qid": 0, 00:15:43.817 "state": "enabled", 00:15:43.817 "thread": "nvmf_tgt_poll_group_000", 00:15:43.817 "listen_address": { 00:15:43.817 "trtype": "TCP", 00:15:43.817 "adrfam": "IPv4", 00:15:43.817 "traddr": "10.0.0.2", 00:15:43.817 "trsvcid": "4420" 00:15:43.817 }, 00:15:43.817 "peer_address": { 00:15:43.817 "trtype": "TCP", 00:15:43.817 "adrfam": "IPv4", 00:15:43.817 "traddr": "10.0.0.1", 00:15:43.817 "trsvcid": "49830" 00:15:43.817 }, 00:15:43.817 "auth": { 00:15:43.817 "state": "completed", 00:15:43.817 "digest": "sha384", 00:15:43.817 "dhgroup": "ffdhe2048" 00:15:43.817 } 00:15:43.817 } 00:15:43.817 ]' 00:15:43.817 10:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.817 10:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.817 10:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.817 10:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:43.817 10:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.077 10:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.077 10:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.077 10:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.337 10:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:15:45.273 10:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.273 10:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:45.273 10:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.273 10:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
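Each pass of this loop follows the same shape: the host's permitted DH-HMAC-CHAP digests and DH groups are set through bdev_nvme_set_options, the host NQN is re-registered on the subsystem with the key pair under test via nvmf_subsystem_add_host, a controller is attached over TCP, the new qpair's auth block is checked for the expected digest, dhgroup, and "completed" state with nvmf_subsystem_get_qpairs and jq, the controller is detached, and nvme connect/disconnect then repeats the handshake from the kernel initiator using --dhchap-secret/--dhchap-ctrl-secret. A minimal sketch of one pass, written with the suite's rpc_cmd (target-side RPC) and hostrpc (rpc.py -s /var/tmp/host.sock) helpers as they appear above; the $HOSTNQN variable and the pre-loaded key names key0/ckey0 are placeholders for illustration, not values taken from this run:

    # host side: restrict the initiator to one digest/dhgroup combination
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # target side: allow $HOSTNQN on the subsystem with the key pair under test
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller over TCP, triggering the DH-HMAC-CHAP handshake
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # target side: the qpair's auth state should report "completed"
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
    hostrpc bdev_nvme_detach_controller nvme0

The --dhchap-ctrlr-key argument makes the authentication bidirectional; the key3 passes in this run omit it on both nvmf_subsystem_add_host and bdev_nvme_attach_controller, exercising the unidirectional case.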
00:15:45.273 10:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.274 10:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:45.274 10:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.274 10:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:45.274 10:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:45.533 10:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:45.533 10:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.533 10:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:45.533 10:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:45.533 10:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:45.533 10:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.533 10:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.533 10:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.533 10:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.533 10:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.533 10:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.533 10:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.791 00:15:45.791 10:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.791 10:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.791 10:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.048 10:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.048 10:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.048 10:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.048 10:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.048 10:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.048 10:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.048 { 
00:15:46.048 "cntlid": 65, 00:15:46.048 "qid": 0, 00:15:46.048 "state": "enabled", 00:15:46.048 "thread": "nvmf_tgt_poll_group_000", 00:15:46.048 "listen_address": { 00:15:46.048 "trtype": "TCP", 00:15:46.048 "adrfam": "IPv4", 00:15:46.048 "traddr": "10.0.0.2", 00:15:46.048 "trsvcid": "4420" 00:15:46.048 }, 00:15:46.048 "peer_address": { 00:15:46.048 "trtype": "TCP", 00:15:46.048 "adrfam": "IPv4", 00:15:46.048 "traddr": "10.0.0.1", 00:15:46.048 "trsvcid": "49860" 00:15:46.048 }, 00:15:46.048 "auth": { 00:15:46.048 "state": "completed", 00:15:46.048 "digest": "sha384", 00:15:46.048 "dhgroup": "ffdhe3072" 00:15:46.048 } 00:15:46.048 } 00:15:46.048 ]' 00:15:46.048 10:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.048 10:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.048 10:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.048 10:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:46.048 10:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.048 10:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.048 10:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.048 10:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.307 10:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:15:47.242 10:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.242 10:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.242 10:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.242 10:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.242 10:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.242 10:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.242 10:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:47.242 10:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:47.500 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:47.500 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.500 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:15:47.500 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:47.500 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:47.500 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.500 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.500 10:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.500 10:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.500 10:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.500 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.500 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.070 00:15:48.070 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.070 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.070 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.070 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.070 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.070 10:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.070 10:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.070 10:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.070 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.070 { 00:15:48.070 "cntlid": 67, 00:15:48.070 "qid": 0, 00:15:48.070 "state": "enabled", 00:15:48.070 "thread": "nvmf_tgt_poll_group_000", 00:15:48.070 "listen_address": { 00:15:48.070 "trtype": "TCP", 00:15:48.070 "adrfam": "IPv4", 00:15:48.070 "traddr": "10.0.0.2", 00:15:48.070 "trsvcid": "4420" 00:15:48.070 }, 00:15:48.070 "peer_address": { 00:15:48.070 "trtype": "TCP", 00:15:48.070 "adrfam": "IPv4", 00:15:48.070 "traddr": "10.0.0.1", 00:15:48.070 "trsvcid": "34786" 00:15:48.070 }, 00:15:48.070 "auth": { 00:15:48.070 "state": "completed", 00:15:48.070 "digest": "sha384", 00:15:48.070 "dhgroup": "ffdhe3072" 00:15:48.070 } 00:15:48.070 } 00:15:48.070 ]' 00:15:48.070 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.328 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.328 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.328 10:28:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:48.328 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.328 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.328 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.328 10:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.586 10:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:15:49.520 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.520 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.520 10:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.520 10:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.520 10:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.520 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.520 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:49.520 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:49.778 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:49.778 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.778 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:49.778 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:49.778 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:49.778 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.778 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.778 10:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.778 10:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.778 10:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.778 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.778 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.346 00:15:50.346 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.346 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.346 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.346 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.346 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.346 10:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.346 10:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.346 10:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.346 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.346 { 00:15:50.346 "cntlid": 69, 00:15:50.346 "qid": 0, 00:15:50.346 "state": "enabled", 00:15:50.346 "thread": "nvmf_tgt_poll_group_000", 00:15:50.346 "listen_address": { 00:15:50.346 "trtype": "TCP", 00:15:50.346 "adrfam": "IPv4", 00:15:50.346 "traddr": "10.0.0.2", 00:15:50.346 "trsvcid": "4420" 00:15:50.346 }, 00:15:50.346 "peer_address": { 00:15:50.346 "trtype": "TCP", 00:15:50.346 "adrfam": "IPv4", 00:15:50.346 "traddr": "10.0.0.1", 00:15:50.346 "trsvcid": "34808" 00:15:50.346 }, 00:15:50.346 "auth": { 00:15:50.346 "state": "completed", 00:15:50.346 "digest": "sha384", 00:15:50.346 "dhgroup": "ffdhe3072" 00:15:50.346 } 00:15:50.346 } 00:15:50.346 ]' 00:15:50.346 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.604 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.604 10:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.604 10:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:50.604 10:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.604 10:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.604 10:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.604 10:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.863 10:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret 
DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:15:51.798 10:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.798 10:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.798 10:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.798 10:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.798 10:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.798 10:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.798 10:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:51.798 10:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:52.056 10:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:52.056 10:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.056 10:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:52.056 10:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:52.056 10:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:52.056 10:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.056 10:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:52.056 10:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.056 10:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.056 10:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.056 10:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:52.056 10:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:52.315 00:15:52.315 10:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.315 10:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.315 10:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.573 10:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.573 10:28:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.573 10:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.573 10:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.573 10:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.573 10:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.573 { 00:15:52.573 "cntlid": 71, 00:15:52.573 "qid": 0, 00:15:52.573 "state": "enabled", 00:15:52.573 "thread": "nvmf_tgt_poll_group_000", 00:15:52.573 "listen_address": { 00:15:52.573 "trtype": "TCP", 00:15:52.573 "adrfam": "IPv4", 00:15:52.573 "traddr": "10.0.0.2", 00:15:52.573 "trsvcid": "4420" 00:15:52.573 }, 00:15:52.573 "peer_address": { 00:15:52.573 "trtype": "TCP", 00:15:52.573 "adrfam": "IPv4", 00:15:52.573 "traddr": "10.0.0.1", 00:15:52.573 "trsvcid": "34842" 00:15:52.573 }, 00:15:52.573 "auth": { 00:15:52.573 "state": "completed", 00:15:52.573 "digest": "sha384", 00:15:52.573 "dhgroup": "ffdhe3072" 00:15:52.573 } 00:15:52.573 } 00:15:52.573 ]' 00:15:52.573 10:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.573 10:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.573 10:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.573 10:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.573 10:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.573 10:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.573 10:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.573 10:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.831 10:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.208 10:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.775 00:15:54.775 10:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.775 10:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.775 10:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.033 10:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.033 10:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.033 10:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.033 10:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.033 10:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.033 10:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.033 { 00:15:55.033 "cntlid": 73, 00:15:55.033 "qid": 0, 00:15:55.033 "state": "enabled", 00:15:55.033 "thread": "nvmf_tgt_poll_group_000", 00:15:55.033 "listen_address": { 00:15:55.033 "trtype": "TCP", 00:15:55.033 "adrfam": "IPv4", 00:15:55.033 "traddr": "10.0.0.2", 00:15:55.033 "trsvcid": "4420" 00:15:55.033 }, 00:15:55.033 "peer_address": { 00:15:55.033 "trtype": "TCP", 00:15:55.033 "adrfam": "IPv4", 00:15:55.033 "traddr": "10.0.0.1", 00:15:55.033 "trsvcid": "34874" 00:15:55.033 }, 00:15:55.033 "auth": { 00:15:55.033 
"state": "completed", 00:15:55.033 "digest": "sha384", 00:15:55.033 "dhgroup": "ffdhe4096" 00:15:55.033 } 00:15:55.033 } 00:15:55.033 ]' 00:15:55.033 10:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.033 10:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.033 10:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.033 10:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.033 10:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.033 10:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.033 10:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.033 10:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.292 10:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:15:56.225 10:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.225 10:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:56.225 10:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.225 10:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.225 10:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.225 10:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.225 10:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:56.226 10:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:56.794 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:56.794 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.794 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:56.794 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:56.794 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:56.794 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.794 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.794 10:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.794 10:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.794 10:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.794 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.794 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.053 00:15:57.053 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.053 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.053 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.310 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.311 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.311 10:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.311 10:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.311 10:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.311 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.311 { 00:15:57.311 "cntlid": 75, 00:15:57.311 "qid": 0, 00:15:57.311 "state": "enabled", 00:15:57.311 "thread": "nvmf_tgt_poll_group_000", 00:15:57.311 "listen_address": { 00:15:57.311 "trtype": "TCP", 00:15:57.311 "adrfam": "IPv4", 00:15:57.311 "traddr": "10.0.0.2", 00:15:57.311 "trsvcid": "4420" 00:15:57.311 }, 00:15:57.311 "peer_address": { 00:15:57.311 "trtype": "TCP", 00:15:57.311 "adrfam": "IPv4", 00:15:57.311 "traddr": "10.0.0.1", 00:15:57.311 "trsvcid": "52270" 00:15:57.311 }, 00:15:57.311 "auth": { 00:15:57.311 "state": "completed", 00:15:57.311 "digest": "sha384", 00:15:57.311 "dhgroup": "ffdhe4096" 00:15:57.311 } 00:15:57.311 } 00:15:57.311 ]' 00:15:57.311 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.311 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.311 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.311 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:57.311 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.570 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.570 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.570 10:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.570 10:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:15:58.949 10:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.949 10:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:58.949 10:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.949 10:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.949 10:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.949 10:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.950 10:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:58.950 10:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:58.950 10:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:58.950 10:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.950 10:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:58.950 10:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:58.950 10:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:58.950 10:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.950 10:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.950 10:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.950 10:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.950 10:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.950 10:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.950 10:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:59.517 00:15:59.517 10:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.517 10:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.517 10:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.774 10:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.774 10:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.775 10:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.775 10:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.775 10:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.775 10:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.775 { 00:15:59.775 "cntlid": 77, 00:15:59.775 "qid": 0, 00:15:59.775 "state": "enabled", 00:15:59.775 "thread": "nvmf_tgt_poll_group_000", 00:15:59.775 "listen_address": { 00:15:59.775 "trtype": "TCP", 00:15:59.775 "adrfam": "IPv4", 00:15:59.775 "traddr": "10.0.0.2", 00:15:59.775 "trsvcid": "4420" 00:15:59.775 }, 00:15:59.775 "peer_address": { 00:15:59.775 "trtype": "TCP", 00:15:59.775 "adrfam": "IPv4", 00:15:59.775 "traddr": "10.0.0.1", 00:15:59.775 "trsvcid": "52288" 00:15:59.775 }, 00:15:59.775 "auth": { 00:15:59.775 "state": "completed", 00:15:59.775 "digest": "sha384", 00:15:59.775 "dhgroup": "ffdhe4096" 00:15:59.775 } 00:15:59.775 } 00:15:59.775 ]' 00:15:59.775 10:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.775 10:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.775 10:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.775 10:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:59.775 10:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.775 10:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.775 10:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.775 10:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.063 10:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:16:01.000 10:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.000 10:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.000 10:28:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.000 10:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.000 10:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.000 10:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.000 10:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:01.000 10:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:01.258 10:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:16:01.258 10:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.258 10:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:01.258 10:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:01.258 10:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:01.258 10:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.258 10:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:01.258 10:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.258 10:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.258 10:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.258 10:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.258 10:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.825 00:16:01.825 10:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.825 10:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.826 10:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.084 10:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.084 10:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.084 10:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.084 10:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.084 10:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.084 10:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.084 { 00:16:02.084 "cntlid": 79, 00:16:02.084 "qid": 
0, 00:16:02.084 "state": "enabled", 00:16:02.084 "thread": "nvmf_tgt_poll_group_000", 00:16:02.084 "listen_address": { 00:16:02.084 "trtype": "TCP", 00:16:02.084 "adrfam": "IPv4", 00:16:02.084 "traddr": "10.0.0.2", 00:16:02.084 "trsvcid": "4420" 00:16:02.084 }, 00:16:02.084 "peer_address": { 00:16:02.084 "trtype": "TCP", 00:16:02.084 "adrfam": "IPv4", 00:16:02.084 "traddr": "10.0.0.1", 00:16:02.084 "trsvcid": "52320" 00:16:02.084 }, 00:16:02.084 "auth": { 00:16:02.084 "state": "completed", 00:16:02.084 "digest": "sha384", 00:16:02.084 "dhgroup": "ffdhe4096" 00:16:02.084 } 00:16:02.084 } 00:16:02.084 ]' 00:16:02.084 10:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.084 10:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.084 10:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.084 10:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:02.084 10:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.084 10:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.084 10:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.084 10:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.342 10:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:16:03.277 10:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.277 10:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.277 10:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.277 10:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.277 10:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.277 10:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.277 10:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.277 10:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:03.277 10:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:03.536 10:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:16:03.536 10:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.536 10:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:03.536 10:28:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:03.536 10:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:03.536 10:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.536 10:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.536 10:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.536 10:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.536 10:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.536 10:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.536 10:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.105 00:16:04.105 10:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.105 10:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.105 10:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.390 10:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.390 10:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.390 10:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.390 10:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.390 10:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.390 10:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.390 { 00:16:04.390 "cntlid": 81, 00:16:04.390 "qid": 0, 00:16:04.390 "state": "enabled", 00:16:04.390 "thread": "nvmf_tgt_poll_group_000", 00:16:04.390 "listen_address": { 00:16:04.390 "trtype": "TCP", 00:16:04.390 "adrfam": "IPv4", 00:16:04.390 "traddr": "10.0.0.2", 00:16:04.390 "trsvcid": "4420" 00:16:04.390 }, 00:16:04.390 "peer_address": { 00:16:04.390 "trtype": "TCP", 00:16:04.390 "adrfam": "IPv4", 00:16:04.390 "traddr": "10.0.0.1", 00:16:04.390 "trsvcid": "52346" 00:16:04.390 }, 00:16:04.390 "auth": { 00:16:04.390 "state": "completed", 00:16:04.390 "digest": "sha384", 00:16:04.390 "dhgroup": "ffdhe6144" 00:16:04.390 } 00:16:04.390 } 00:16:04.390 ]' 00:16:04.390 10:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.390 10:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.390 10:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.647 10:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:04.647 10:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.647 10:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.647 10:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.647 10:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.903 10:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:16:05.833 10:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.833 10:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.833 10:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.833 10:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.833 10:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.833 10:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.833 10:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:05.833 10:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:06.089 10:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:16:06.089 10:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.089 10:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:06.089 10:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:06.089 10:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:06.089 10:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.089 10:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.089 10:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.089 10:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.089 10:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.089 10:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.089 10:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.651 00:16:06.651 10:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.651 10:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.651 10:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.907 10:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.907 10:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.907 10:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.907 10:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.907 10:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.907 10:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.907 { 00:16:06.907 "cntlid": 83, 00:16:06.907 "qid": 0, 00:16:06.907 "state": "enabled", 00:16:06.907 "thread": "nvmf_tgt_poll_group_000", 00:16:06.907 "listen_address": { 00:16:06.907 "trtype": "TCP", 00:16:06.907 "adrfam": "IPv4", 00:16:06.907 "traddr": "10.0.0.2", 00:16:06.907 "trsvcid": "4420" 00:16:06.907 }, 00:16:06.907 "peer_address": { 00:16:06.907 "trtype": "TCP", 00:16:06.907 "adrfam": "IPv4", 00:16:06.907 "traddr": "10.0.0.1", 00:16:06.907 "trsvcid": "52372" 00:16:06.907 }, 00:16:06.907 "auth": { 00:16:06.907 "state": "completed", 00:16:06.907 "digest": "sha384", 00:16:06.907 "dhgroup": "ffdhe6144" 00:16:06.907 } 00:16:06.907 } 00:16:06.907 ]' 00:16:06.907 10:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.907 10:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.907 10:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.907 10:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:06.907 10:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.166 10:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.166 10:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.166 10:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.166 10:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret 
DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:16:08.540 10:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.540 10:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.540 10:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.540 10:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.540 10:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.540 10:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.540 10:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:08.540 10:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:08.540 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:16:08.540 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.540 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:08.540 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:08.540 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:08.540 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.540 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.540 10:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.540 10:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.540 10:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.540 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.540 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.108 00:16:09.108 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.108 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.108 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.365 10:29:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.365 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.365 10:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.365 10:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.365 10:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.365 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.365 { 00:16:09.365 "cntlid": 85, 00:16:09.365 "qid": 0, 00:16:09.365 "state": "enabled", 00:16:09.365 "thread": "nvmf_tgt_poll_group_000", 00:16:09.365 "listen_address": { 00:16:09.365 "trtype": "TCP", 00:16:09.365 "adrfam": "IPv4", 00:16:09.365 "traddr": "10.0.0.2", 00:16:09.365 "trsvcid": "4420" 00:16:09.365 }, 00:16:09.365 "peer_address": { 00:16:09.365 "trtype": "TCP", 00:16:09.365 "adrfam": "IPv4", 00:16:09.365 "traddr": "10.0.0.1", 00:16:09.365 "trsvcid": "51398" 00:16:09.365 }, 00:16:09.365 "auth": { 00:16:09.365 "state": "completed", 00:16:09.365 "digest": "sha384", 00:16:09.365 "dhgroup": "ffdhe6144" 00:16:09.365 } 00:16:09.365 } 00:16:09.365 ]' 00:16:09.365 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.366 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.366 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.366 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:09.366 10:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.366 10:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.366 10:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.366 10:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.623 10:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:16:10.558 10:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.558 10:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.558 10:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.558 10:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.558 10:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.558 10:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.558 10:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
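Each pass through the trace above is one iteration of the same DH-HMAC-CHAP round trip, driven by the nested loops at target/auth.sh@91-93 (digest, then dhgroup, then key id). A minimal sketch of one iteration follows; the rpc.py path, socket, NQNs, and flags are copied from the log, while $digest, $dhgroup, and $keyid stand in for the loop variables, and the key$keyid/ckey$keyid names are assumed to have been registered earlier in the script, outside this excerpt:

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side: restrict bdev_nvme to the digest/dhgroup pair under test
    # (sha384/sha512 x ffdhe6144/ffdhe8192/null/ffdhe2048 in this run).
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Target side: admit the host with this key pair (key3 in this run has
    # no controller key, so its --dhchap-ctrlr-key argument is omitted).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Host side: attaching the controller forces the DH-HMAC-CHAP exchange.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # The iteration then verifies the qpair, detaches nvme0, repeats the
    # handshake with the kernel initiator (nvme connect/disconnect), and
    # removes the host so the next digest/dhgroup pair starts clean.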
00:16:10.558 10:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:10.816 10:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:16:10.816 10:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.816 10:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:10.816 10:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:10.816 10:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:10.816 10:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.816 10:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:10.816 10:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.816 10:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.816 10:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.816 10:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:10.816 10:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.381 00:16:11.381 10:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.381 10:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.381 10:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.639 10:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.639 10:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.639 10:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.639 10:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.639 10:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.639 10:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.639 { 00:16:11.639 "cntlid": 87, 00:16:11.639 "qid": 0, 00:16:11.639 "state": "enabled", 00:16:11.639 "thread": "nvmf_tgt_poll_group_000", 00:16:11.639 "listen_address": { 00:16:11.639 "trtype": "TCP", 00:16:11.639 "adrfam": "IPv4", 00:16:11.639 "traddr": "10.0.0.2", 00:16:11.639 "trsvcid": "4420" 00:16:11.639 }, 00:16:11.639 "peer_address": { 00:16:11.639 "trtype": "TCP", 00:16:11.639 "adrfam": "IPv4", 00:16:11.639 "traddr": "10.0.0.1", 00:16:11.639 "trsvcid": "51438" 00:16:11.639 }, 00:16:11.639 "auth": { 00:16:11.639 "state": "completed", 
00:16:11.639 "digest": "sha384", 00:16:11.639 "dhgroup": "ffdhe6144" 00:16:11.639 } 00:16:11.639 } 00:16:11.639 ]' 00:16:11.639 10:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.897 10:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:11.897 10:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.897 10:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:11.897 10:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.897 10:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.897 10:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.897 10:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.155 10:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:16:13.090 10:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.090 10:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.090 10:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.090 10:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.090 10:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.090 10:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.090 10:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.090 10:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:13.090 10:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:13.347 10:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:16:13.347 10:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.347 10:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:13.347 10:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:13.347 10:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:13.347 10:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.347 10:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:13.347 10:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.347 10:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.347 10:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.347 10:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.347 10:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.322 00:16:14.322 10:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.322 10:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.322 10:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.579 10:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.579 10:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.579 10:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.579 10:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.579 10:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.579 10:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.579 { 00:16:14.579 "cntlid": 89, 00:16:14.579 "qid": 0, 00:16:14.579 "state": "enabled", 00:16:14.579 "thread": "nvmf_tgt_poll_group_000", 00:16:14.579 "listen_address": { 00:16:14.579 "trtype": "TCP", 00:16:14.579 "adrfam": "IPv4", 00:16:14.579 "traddr": "10.0.0.2", 00:16:14.579 "trsvcid": "4420" 00:16:14.579 }, 00:16:14.579 "peer_address": { 00:16:14.579 "trtype": "TCP", 00:16:14.579 "adrfam": "IPv4", 00:16:14.579 "traddr": "10.0.0.1", 00:16:14.579 "trsvcid": "51474" 00:16:14.579 }, 00:16:14.579 "auth": { 00:16:14.579 "state": "completed", 00:16:14.579 "digest": "sha384", 00:16:14.579 "dhgroup": "ffdhe8192" 00:16:14.579 } 00:16:14.579 } 00:16:14.579 ]' 00:16:14.579 10:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.579 10:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.579 10:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.579 10:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:14.579 10:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.579 10:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.579 10:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.580 10:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.837 10:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:16:15.775 10:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.775 10:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.775 10:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.775 10:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.775 10:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.775 10:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.775 10:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:15.775 10:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:16.033 10:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:16:16.033 10:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.033 10:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:16.033 10:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:16.033 10:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:16.033 10:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.033 10:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.033 10:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.033 10:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.033 10:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.033 10:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.033 10:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
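After each attach, the script asserts that authentication actually completed rather than merely that the connect call returned: bdev_nvme_get_controllers must report nvme0, and the subsystem's qpair listing must carry an "auth" object whose digest, dhgroup, and state match the iteration's parameters (the [[ x == \x ]] comparisons in the trace are bash pattern matches with every character escaped, i.e. plain string equality). A sketch of those checks, reusing $rpc, $digest, and $dhgroup from the sketch above:

    # The host-side controller must exist under the expected name.
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name') == nvme0 ]]

    # The target's qpair records the negotiated auth parameters; "completed"
    # means the DH-HMAC-CHAP exchange finished successfully.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]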
00:16:16.968 00:16:16.968 10:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.968 10:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.968 10:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.226 10:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.226 10:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.226 10:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.226 10:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.226 10:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.226 10:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.226 { 00:16:17.226 "cntlid": 91, 00:16:17.226 "qid": 0, 00:16:17.226 "state": "enabled", 00:16:17.226 "thread": "nvmf_tgt_poll_group_000", 00:16:17.226 "listen_address": { 00:16:17.226 "trtype": "TCP", 00:16:17.226 "adrfam": "IPv4", 00:16:17.226 "traddr": "10.0.0.2", 00:16:17.226 "trsvcid": "4420" 00:16:17.226 }, 00:16:17.226 "peer_address": { 00:16:17.226 "trtype": "TCP", 00:16:17.226 "adrfam": "IPv4", 00:16:17.226 "traddr": "10.0.0.1", 00:16:17.227 "trsvcid": "51498" 00:16:17.227 }, 00:16:17.227 "auth": { 00:16:17.227 "state": "completed", 00:16:17.227 "digest": "sha384", 00:16:17.227 "dhgroup": "ffdhe8192" 00:16:17.227 } 00:16:17.227 } 00:16:17.227 ]' 00:16:17.227 10:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.227 10:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.227 10:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.227 10:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:17.227 10:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.485 10:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.485 10:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.485 10:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.744 10:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:16:18.680 10:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.680 10:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:18.680 10:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:18.680 10:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.680 10:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.680 10:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.680 10:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.680 10:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.937 10:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:16:18.937 10:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.937 10:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:18.937 10:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:18.937 10:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:18.937 10:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.937 10:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.937 10:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.937 10:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.937 10:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.937 10:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.937 10:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.872 00:16:19.872 10:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.872 10:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.872 10:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.129 10:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.129 10:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.129 10:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.129 10:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.129 10:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.129 10:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.129 { 
00:16:20.129 "cntlid": 93, 00:16:20.129 "qid": 0, 00:16:20.129 "state": "enabled", 00:16:20.129 "thread": "nvmf_tgt_poll_group_000", 00:16:20.129 "listen_address": { 00:16:20.129 "trtype": "TCP", 00:16:20.129 "adrfam": "IPv4", 00:16:20.129 "traddr": "10.0.0.2", 00:16:20.129 "trsvcid": "4420" 00:16:20.129 }, 00:16:20.129 "peer_address": { 00:16:20.129 "trtype": "TCP", 00:16:20.129 "adrfam": "IPv4", 00:16:20.129 "traddr": "10.0.0.1", 00:16:20.129 "trsvcid": "39974" 00:16:20.129 }, 00:16:20.129 "auth": { 00:16:20.129 "state": "completed", 00:16:20.129 "digest": "sha384", 00:16:20.129 "dhgroup": "ffdhe8192" 00:16:20.129 } 00:16:20.129 } 00:16:20.129 ]' 00:16:20.129 10:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.129 10:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.129 10:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.129 10:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.129 10:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.129 10:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.129 10:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.129 10:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.387 10:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:16:21.322 10:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.322 10:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:21.322 10:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.322 10:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.322 10:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.322 10:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.322 10:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:21.322 10:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:21.580 10:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:21.580 10:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.580 10:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:21.580 10:29:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:21.580 10:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:21.580 10:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.580 10:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:21.580 10:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.580 10:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.580 10:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.580 10:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.580 10:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:22.513 00:16:22.513 10:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.513 10:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.513 10:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.771 10:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.771 10:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.771 10:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.771 10:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.771 10:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.771 10:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.771 { 00:16:22.771 "cntlid": 95, 00:16:22.771 "qid": 0, 00:16:22.771 "state": "enabled", 00:16:22.771 "thread": "nvmf_tgt_poll_group_000", 00:16:22.771 "listen_address": { 00:16:22.771 "trtype": "TCP", 00:16:22.771 "adrfam": "IPv4", 00:16:22.771 "traddr": "10.0.0.2", 00:16:22.771 "trsvcid": "4420" 00:16:22.771 }, 00:16:22.771 "peer_address": { 00:16:22.771 "trtype": "TCP", 00:16:22.771 "adrfam": "IPv4", 00:16:22.771 "traddr": "10.0.0.1", 00:16:22.771 "trsvcid": "40004" 00:16:22.771 }, 00:16:22.771 "auth": { 00:16:22.771 "state": "completed", 00:16:22.771 "digest": "sha384", 00:16:22.771 "dhgroup": "ffdhe8192" 00:16:22.771 } 00:16:22.771 } 00:16:22.771 ]' 00:16:22.771 10:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.771 10:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.771 10:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.771 10:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:22.771 10:29:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.771 10:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.771 10:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.772 10:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.337 10:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:16:24.270 10:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.270 10:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.270 10:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.270 10:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.270 10:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.270 10:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:24.270 10:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.270 10:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.270 10:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:24.270 10:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:24.527 10:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:24.527 10:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.527 10:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:24.527 10:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:24.527 10:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:24.527 10:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.527 10:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.527 10:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.527 10:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.527 10:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.527 10:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.527 10:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.784 00:16:24.784 10:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.784 10:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.784 10:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.042 10:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.042 10:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.042 10:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.042 10:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.042 10:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.042 10:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.042 { 00:16:25.042 "cntlid": 97, 00:16:25.042 "qid": 0, 00:16:25.042 "state": "enabled", 00:16:25.043 "thread": "nvmf_tgt_poll_group_000", 00:16:25.043 "listen_address": { 00:16:25.043 "trtype": "TCP", 00:16:25.043 "adrfam": "IPv4", 00:16:25.043 "traddr": "10.0.0.2", 00:16:25.043 "trsvcid": "4420" 00:16:25.043 }, 00:16:25.043 "peer_address": { 00:16:25.043 "trtype": "TCP", 00:16:25.043 "adrfam": "IPv4", 00:16:25.043 "traddr": "10.0.0.1", 00:16:25.043 "trsvcid": "40028" 00:16:25.043 }, 00:16:25.043 "auth": { 00:16:25.043 "state": "completed", 00:16:25.043 "digest": "sha512", 00:16:25.043 "dhgroup": "null" 00:16:25.043 } 00:16:25.043 } 00:16:25.043 ]' 00:16:25.043 10:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.043 10:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.043 10:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.043 10:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:25.043 10:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.043 10:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.043 10:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.043 10:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.301 10:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret 
DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:16:26.236 10:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.494 10:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.494 10:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.494 10:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.494 10:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.494 10:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.494 10:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:26.494 10:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:26.752 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:16:26.752 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.752 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:26.752 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:26.752 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:26.752 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.752 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.753 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.753 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.753 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.753 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.753 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.010 00:16:27.011 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.011 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.011 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.268 10:29:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.268 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.268 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.268 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.268 10:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.268 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.268 { 00:16:27.268 "cntlid": 99, 00:16:27.268 "qid": 0, 00:16:27.268 "state": "enabled", 00:16:27.268 "thread": "nvmf_tgt_poll_group_000", 00:16:27.268 "listen_address": { 00:16:27.268 "trtype": "TCP", 00:16:27.268 "adrfam": "IPv4", 00:16:27.268 "traddr": "10.0.0.2", 00:16:27.268 "trsvcid": "4420" 00:16:27.268 }, 00:16:27.268 "peer_address": { 00:16:27.268 "trtype": "TCP", 00:16:27.268 "adrfam": "IPv4", 00:16:27.268 "traddr": "10.0.0.1", 00:16:27.268 "trsvcid": "51850" 00:16:27.268 }, 00:16:27.268 "auth": { 00:16:27.268 "state": "completed", 00:16:27.268 "digest": "sha512", 00:16:27.268 "dhgroup": "null" 00:16:27.268 } 00:16:27.268 } 00:16:27.268 ]' 00:16:27.268 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.268 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.268 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.268 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:27.268 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.268 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.268 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.268 10:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.526 10:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:16:28.897 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.897 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.897 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.897 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.897 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.897 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.897 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:28.897 10:29:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:28.897 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:16:28.897 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.897 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:28.897 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:28.897 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:28.897 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.897 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.897 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.898 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.898 10:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.898 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.898 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.154 00:16:29.154 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.154 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.154 10:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.411 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.411 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.411 10:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.411 10:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.668 10:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.668 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.668 { 00:16:29.668 "cntlid": 101, 00:16:29.668 "qid": 0, 00:16:29.668 "state": "enabled", 00:16:29.668 "thread": "nvmf_tgt_poll_group_000", 00:16:29.668 "listen_address": { 00:16:29.668 "trtype": "TCP", 00:16:29.668 "adrfam": "IPv4", 00:16:29.668 "traddr": "10.0.0.2", 00:16:29.668 "trsvcid": "4420" 00:16:29.668 }, 00:16:29.668 "peer_address": { 00:16:29.668 "trtype": "TCP", 00:16:29.668 "adrfam": "IPv4", 00:16:29.668 "traddr": "10.0.0.1", 00:16:29.668 "trsvcid": "51890" 00:16:29.668 }, 00:16:29.668 "auth": 
{ 00:16:29.668 "state": "completed", 00:16:29.668 "digest": "sha512", 00:16:29.668 "dhgroup": "null" 00:16:29.668 } 00:16:29.668 } 00:16:29.668 ]' 00:16:29.668 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.668 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.668 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.668 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:29.668 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.668 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.668 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.668 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.926 10:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:16:30.888 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.888 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:30.888 10:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.888 10:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.888 10:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.888 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.888 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:30.888 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:31.145 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:31.145 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.145 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:31.145 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:31.145 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:31.145 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.145 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:31.145 10:29:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.145 10:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.145 10:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.145 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.145 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.402 00:16:31.402 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.402 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.402 10:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.659 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.659 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.659 10:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.659 10:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.659 10:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.659 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.659 { 00:16:31.659 "cntlid": 103, 00:16:31.659 "qid": 0, 00:16:31.659 "state": "enabled", 00:16:31.659 "thread": "nvmf_tgt_poll_group_000", 00:16:31.659 "listen_address": { 00:16:31.659 "trtype": "TCP", 00:16:31.659 "adrfam": "IPv4", 00:16:31.659 "traddr": "10.0.0.2", 00:16:31.659 "trsvcid": "4420" 00:16:31.659 }, 00:16:31.659 "peer_address": { 00:16:31.659 "trtype": "TCP", 00:16:31.659 "adrfam": "IPv4", 00:16:31.659 "traddr": "10.0.0.1", 00:16:31.659 "trsvcid": "51914" 00:16:31.659 }, 00:16:31.659 "auth": { 00:16:31.659 "state": "completed", 00:16:31.659 "digest": "sha512", 00:16:31.659 "dhgroup": "null" 00:16:31.659 } 00:16:31.659 } 00:16:31.659 ]' 00:16:31.659 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.659 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.659 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.659 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:31.659 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.918 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.918 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.918 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.177 10:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:16:33.108 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.108 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.108 10:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.108 10:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.108 10:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.108 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.108 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.108 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:33.108 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:33.368 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:33.368 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.368 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:33.368 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:33.368 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:33.368 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.368 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.368 10:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.368 10:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.368 10:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.368 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.368 10:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.626 00:16:33.626 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.626 10:29:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.626 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.883 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.884 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.884 10:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.884 10:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.884 10:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.884 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.884 { 00:16:33.884 "cntlid": 105, 00:16:33.884 "qid": 0, 00:16:33.884 "state": "enabled", 00:16:33.884 "thread": "nvmf_tgt_poll_group_000", 00:16:33.884 "listen_address": { 00:16:33.884 "trtype": "TCP", 00:16:33.884 "adrfam": "IPv4", 00:16:33.884 "traddr": "10.0.0.2", 00:16:33.884 "trsvcid": "4420" 00:16:33.884 }, 00:16:33.884 "peer_address": { 00:16:33.884 "trtype": "TCP", 00:16:33.884 "adrfam": "IPv4", 00:16:33.884 "traddr": "10.0.0.1", 00:16:33.884 "trsvcid": "51944" 00:16:33.884 }, 00:16:33.884 "auth": { 00:16:33.884 "state": "completed", 00:16:33.884 "digest": "sha512", 00:16:33.884 "dhgroup": "ffdhe2048" 00:16:33.884 } 00:16:33.884 } 00:16:33.884 ]' 00:16:33.884 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.884 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.884 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.884 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.884 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.143 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.143 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.143 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.402 10:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:16:35.338 10:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.338 10:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.339 10:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.339 10:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
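Each authentication round in this log repeats the same RPC sequence: constrain the host-side driver to one digest/dhgroup pair, register the host NQN on the target subsystem with the DH-HMAC-CHAP key under test, attach a controller, then verify and tear down. A condensed sketch of one round, reconstructed from the commands visible above (the host NQN is abbreviated here to <host-nqn>; key0/ckey0 are the keyring names the test registered earlier in the run, and a plain rpc.py invocation stands in for the test's rpc_cmd/hostrpc wrappers):

    # Host side: allow only the digest/dhgroup combination under test
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    # Target side: bind the host NQN to the subsystem with its key(s)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Host side: attach a controller, authenticating with the same key(s)
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
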
00:16:35.339 10:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.339 10:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.339 10:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:35.339 10:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:35.597 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:35.597 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.597 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:35.597 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:35.597 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:35.597 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.597 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.597 10:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.597 10:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.597 10:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.597 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.597 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.165 00:16:36.165 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.165 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.165 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.165 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.165 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.165 10:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.165 10:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.165 10:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.165 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.165 { 00:16:36.165 "cntlid": 107, 00:16:36.165 "qid": 0, 00:16:36.165 "state": "enabled", 00:16:36.165 "thread": 
"nvmf_tgt_poll_group_000", 00:16:36.165 "listen_address": { 00:16:36.165 "trtype": "TCP", 00:16:36.165 "adrfam": "IPv4", 00:16:36.165 "traddr": "10.0.0.2", 00:16:36.165 "trsvcid": "4420" 00:16:36.165 }, 00:16:36.165 "peer_address": { 00:16:36.165 "trtype": "TCP", 00:16:36.165 "adrfam": "IPv4", 00:16:36.165 "traddr": "10.0.0.1", 00:16:36.165 "trsvcid": "51960" 00:16:36.165 }, 00:16:36.165 "auth": { 00:16:36.165 "state": "completed", 00:16:36.165 "digest": "sha512", 00:16:36.165 "dhgroup": "ffdhe2048" 00:16:36.165 } 00:16:36.165 } 00:16:36.165 ]' 00:16:36.165 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.423 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.423 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.423 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.423 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.423 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.423 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.423 10:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.681 10:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:16:37.618 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.618 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.618 10:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.618 10:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.618 10:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.618 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.618 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:37.618 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:37.876 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:37.876 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.876 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:37.876 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:37.876 10:29:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:37.876 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.876 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.876 10:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.876 10:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.876 10:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.876 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.876 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.135 00:16:38.135 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.135 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.135 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.392 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.392 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.392 10:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.392 10:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.392 10:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.392 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.392 { 00:16:38.392 "cntlid": 109, 00:16:38.392 "qid": 0, 00:16:38.392 "state": "enabled", 00:16:38.392 "thread": "nvmf_tgt_poll_group_000", 00:16:38.392 "listen_address": { 00:16:38.392 "trtype": "TCP", 00:16:38.392 "adrfam": "IPv4", 00:16:38.392 "traddr": "10.0.0.2", 00:16:38.392 "trsvcid": "4420" 00:16:38.392 }, 00:16:38.392 "peer_address": { 00:16:38.392 "trtype": "TCP", 00:16:38.392 "adrfam": "IPv4", 00:16:38.392 "traddr": "10.0.0.1", 00:16:38.392 "trsvcid": "41872" 00:16:38.392 }, 00:16:38.392 "auth": { 00:16:38.392 "state": "completed", 00:16:38.392 "digest": "sha512", 00:16:38.392 "dhgroup": "ffdhe2048" 00:16:38.392 } 00:16:38.392 } 00:16:38.392 ]' 00:16:38.392 10:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.392 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.392 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.649 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.649 10:29:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.649 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.649 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.649 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.906 10:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:16:39.845 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.845 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.845 10:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.845 10:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.845 10:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.845 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.845 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:39.845 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:40.103 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:40.103 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.103 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:40.103 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:40.103 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:40.103 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.103 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:40.103 10:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.103 10:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.103 10:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.103 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.103 10:29:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.360 00:16:40.360 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.360 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.360 10:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.618 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.618 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.618 10:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.618 10:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.618 10:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.618 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.618 { 00:16:40.618 "cntlid": 111, 00:16:40.618 "qid": 0, 00:16:40.618 "state": "enabled", 00:16:40.618 "thread": "nvmf_tgt_poll_group_000", 00:16:40.618 "listen_address": { 00:16:40.618 "trtype": "TCP", 00:16:40.618 "adrfam": "IPv4", 00:16:40.618 "traddr": "10.0.0.2", 00:16:40.618 "trsvcid": "4420" 00:16:40.618 }, 00:16:40.618 "peer_address": { 00:16:40.618 "trtype": "TCP", 00:16:40.618 "adrfam": "IPv4", 00:16:40.618 "traddr": "10.0.0.1", 00:16:40.618 "trsvcid": "41894" 00:16:40.618 }, 00:16:40.618 "auth": { 00:16:40.618 "state": "completed", 00:16:40.618 "digest": "sha512", 00:16:40.618 "dhgroup": "ffdhe2048" 00:16:40.618 } 00:16:40.618 } 00:16:40.618 ]' 00:16:40.618 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.618 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.618 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.618 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:40.618 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.877 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.877 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.877 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.135 10:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:16:42.071 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.071 10:29:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.071 10:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.071 10:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.071 10:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.071 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.071 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.071 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:42.071 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:42.333 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:42.333 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.333 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:42.333 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:42.333 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:42.333 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.333 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.333 10:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.333 10:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.333 10:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.333 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.333 10:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.590 00:16:42.590 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.590 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.590 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.848 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.848 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.848 10:29:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.848 10:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.848 10:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.848 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.848 { 00:16:42.848 "cntlid": 113, 00:16:42.848 "qid": 0, 00:16:42.848 "state": "enabled", 00:16:42.848 "thread": "nvmf_tgt_poll_group_000", 00:16:42.848 "listen_address": { 00:16:42.848 "trtype": "TCP", 00:16:42.848 "adrfam": "IPv4", 00:16:42.848 "traddr": "10.0.0.2", 00:16:42.848 "trsvcid": "4420" 00:16:42.848 }, 00:16:42.848 "peer_address": { 00:16:42.848 "trtype": "TCP", 00:16:42.848 "adrfam": "IPv4", 00:16:42.848 "traddr": "10.0.0.1", 00:16:42.848 "trsvcid": "41906" 00:16:42.848 }, 00:16:42.848 "auth": { 00:16:42.848 "state": "completed", 00:16:42.848 "digest": "sha512", 00:16:42.848 "dhgroup": "ffdhe3072" 00:16:42.848 } 00:16:42.848 } 00:16:42.848 ]' 00:16:42.848 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.848 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.848 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.848 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.848 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.106 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.106 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.106 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.364 10:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:16:44.301 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.301 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.301 10:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.301 10:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.301 10:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.301 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.301 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:44.301 10:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:44.559 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:44.559 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.559 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:44.559 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:44.559 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:44.559 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.559 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.559 10:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.559 10:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.559 10:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.559 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.559 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.844 00:16:45.106 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.106 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.106 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.106 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.106 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.106 10:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.106 10:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.106 10:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.106 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.106 { 00:16:45.106 "cntlid": 115, 00:16:45.106 "qid": 0, 00:16:45.106 "state": "enabled", 00:16:45.106 "thread": "nvmf_tgt_poll_group_000", 00:16:45.106 "listen_address": { 00:16:45.106 "trtype": "TCP", 00:16:45.106 "adrfam": "IPv4", 00:16:45.106 "traddr": "10.0.0.2", 00:16:45.106 "trsvcid": "4420" 00:16:45.106 }, 00:16:45.106 "peer_address": { 00:16:45.106 "trtype": "TCP", 00:16:45.106 "adrfam": "IPv4", 00:16:45.106 "traddr": "10.0.0.1", 00:16:45.106 "trsvcid": "41936" 00:16:45.106 }, 00:16:45.106 "auth": { 00:16:45.106 "state": "completed", 00:16:45.106 "digest": "sha512", 00:16:45.106 "dhgroup": "ffdhe3072" 00:16:45.106 } 00:16:45.106 } 
00:16:45.106 ]' 00:16:45.106 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.364 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.364 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.364 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:45.365 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.365 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.365 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.365 10:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.623 10:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:16:46.558 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.558 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.558 10:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.558 10:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.558 10:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.559 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.559 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.559 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.817 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:46.817 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.817 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:46.817 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:46.817 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:46.817 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.817 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.817 10:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.817 10:29:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.817 10:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.817 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.817 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.384 00:16:47.384 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.384 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.384 10:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.384 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.384 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.384 10:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.384 10:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.384 10:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.384 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.384 { 00:16:47.384 "cntlid": 117, 00:16:47.384 "qid": 0, 00:16:47.384 "state": "enabled", 00:16:47.384 "thread": "nvmf_tgt_poll_group_000", 00:16:47.384 "listen_address": { 00:16:47.384 "trtype": "TCP", 00:16:47.384 "adrfam": "IPv4", 00:16:47.384 "traddr": "10.0.0.2", 00:16:47.384 "trsvcid": "4420" 00:16:47.384 }, 00:16:47.384 "peer_address": { 00:16:47.384 "trtype": "TCP", 00:16:47.384 "adrfam": "IPv4", 00:16:47.384 "traddr": "10.0.0.1", 00:16:47.384 "trsvcid": "42472" 00:16:47.384 }, 00:16:47.384 "auth": { 00:16:47.384 "state": "completed", 00:16:47.384 "digest": "sha512", 00:16:47.384 "dhgroup": "ffdhe3072" 00:16:47.384 } 00:16:47.384 } 00:16:47.384 ]' 00:16:47.384 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.644 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.644 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.644 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:47.644 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.644 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.644 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.644 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.903 10:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:16:48.849 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.849 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.849 10:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.849 10:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.849 10:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.849 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.849 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:48.849 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:49.106 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:49.106 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.106 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:49.106 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:49.106 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:49.106 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.106 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:49.106 10:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.106 10:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.106 10:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.106 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.106 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.364 00:16:49.364 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.364 10:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.364 10:29:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.622 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.622 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.622 10:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.622 10:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.622 10:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.622 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.622 { 00:16:49.622 "cntlid": 119, 00:16:49.622 "qid": 0, 00:16:49.622 "state": "enabled", 00:16:49.622 "thread": "nvmf_tgt_poll_group_000", 00:16:49.622 "listen_address": { 00:16:49.622 "trtype": "TCP", 00:16:49.622 "adrfam": "IPv4", 00:16:49.622 "traddr": "10.0.0.2", 00:16:49.622 "trsvcid": "4420" 00:16:49.622 }, 00:16:49.622 "peer_address": { 00:16:49.622 "trtype": "TCP", 00:16:49.622 "adrfam": "IPv4", 00:16:49.622 "traddr": "10.0.0.1", 00:16:49.622 "trsvcid": "42492" 00:16:49.622 }, 00:16:49.622 "auth": { 00:16:49.622 "state": "completed", 00:16:49.622 "digest": "sha512", 00:16:49.622 "dhgroup": "ffdhe3072" 00:16:49.622 } 00:16:49.622 } 00:16:49.622 ]' 00:16:49.622 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.879 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.879 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.879 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:49.879 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.879 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.879 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.879 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.138 10:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:16:51.077 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.077 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.077 10:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.077 10:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.077 10:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.077 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.077 10:29:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.077 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:51.077 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:51.336 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:51.336 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.336 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:51.336 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:51.336 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:51.336 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.336 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.336 10:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.336 10:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.336 10:29:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.336 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.336 10:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.906 00:16:51.906 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.906 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.906 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.906 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.906 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.906 10:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.906 10:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.906 10:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.906 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.906 { 00:16:51.906 "cntlid": 121, 00:16:51.906 "qid": 0, 00:16:51.906 "state": "enabled", 00:16:51.906 "thread": "nvmf_tgt_poll_group_000", 00:16:51.906 "listen_address": { 00:16:51.906 "trtype": "TCP", 00:16:51.906 "adrfam": "IPv4", 
00:16:51.906 "traddr": "10.0.0.2", 00:16:51.906 "trsvcid": "4420" 00:16:51.906 }, 00:16:51.906 "peer_address": { 00:16:51.906 "trtype": "TCP", 00:16:51.906 "adrfam": "IPv4", 00:16:51.906 "traddr": "10.0.0.1", 00:16:51.906 "trsvcid": "42506" 00:16:51.906 }, 00:16:51.906 "auth": { 00:16:51.906 "state": "completed", 00:16:51.906 "digest": "sha512", 00:16:51.906 "dhgroup": "ffdhe4096" 00:16:51.906 } 00:16:51.906 } 00:16:51.906 ]' 00:16:51.906 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.164 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.164 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.164 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:52.164 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.164 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.164 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.164 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.430 10:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:16:53.364 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.364 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.364 10:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.364 10:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.364 10:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.364 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.364 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:53.364 10:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:53.621 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:53.621 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.621 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:53.621 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:53.621 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:53.621 10:29:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.621 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.621 10:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.621 10:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.621 10:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.621 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.621 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.230 00:16:54.230 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.230 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.230 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.487 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.487 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.487 10:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.487 10:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.487 10:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.487 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.487 { 00:16:54.487 "cntlid": 123, 00:16:54.487 "qid": 0, 00:16:54.487 "state": "enabled", 00:16:54.487 "thread": "nvmf_tgt_poll_group_000", 00:16:54.487 "listen_address": { 00:16:54.487 "trtype": "TCP", 00:16:54.487 "adrfam": "IPv4", 00:16:54.487 "traddr": "10.0.0.2", 00:16:54.487 "trsvcid": "4420" 00:16:54.487 }, 00:16:54.487 "peer_address": { 00:16:54.487 "trtype": "TCP", 00:16:54.487 "adrfam": "IPv4", 00:16:54.487 "traddr": "10.0.0.1", 00:16:54.487 "trsvcid": "42526" 00:16:54.487 }, 00:16:54.487 "auth": { 00:16:54.487 "state": "completed", 00:16:54.487 "digest": "sha512", 00:16:54.487 "dhgroup": "ffdhe4096" 00:16:54.487 } 00:16:54.487 } 00:16:54.487 ]' 00:16:54.487 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.487 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.488 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.488 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:54.488 10:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.488 10:29:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.488 10:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.488 10:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.745 10:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:16:55.681 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.681 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.681 10:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.681 10:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.681 10:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.681 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.681 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:55.681 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:55.939 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:55.939 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.939 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:55.939 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:55.939 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:55.939 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.939 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.939 10:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.939 10:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.939 10:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.939 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.939 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.506 00:16:56.506 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.506 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.506 10:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.764 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.764 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.764 10:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.764 10:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.764 10:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.764 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.764 { 00:16:56.764 "cntlid": 125, 00:16:56.764 "qid": 0, 00:16:56.764 "state": "enabled", 00:16:56.764 "thread": "nvmf_tgt_poll_group_000", 00:16:56.764 "listen_address": { 00:16:56.764 "trtype": "TCP", 00:16:56.764 "adrfam": "IPv4", 00:16:56.764 "traddr": "10.0.0.2", 00:16:56.764 "trsvcid": "4420" 00:16:56.764 }, 00:16:56.764 "peer_address": { 00:16:56.764 "trtype": "TCP", 00:16:56.764 "adrfam": "IPv4", 00:16:56.764 "traddr": "10.0.0.1", 00:16:56.764 "trsvcid": "42566" 00:16:56.764 }, 00:16:56.764 "auth": { 00:16:56.764 "state": "completed", 00:16:56.764 "digest": "sha512", 00:16:56.764 "dhgroup": "ffdhe4096" 00:16:56.764 } 00:16:56.764 } 00:16:56.764 ]' 00:16:56.764 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.764 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.764 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.764 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:56.764 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.764 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.764 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.764 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.022 10:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:16:57.960 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
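The connect/disconnect pair above is the host-side half of each cycle: nvme-cli presents both DH-HMAC-CHAP secrets directly on the command line. A minimal standalone sketch, keeping the target address and subsystem from this run but with placeholder secrets and host UUID in place of the generated ones:

# --dhchap-secret lets the host prove itself; adding --dhchap-ctrl-secret makes
# the authentication bidirectional. <host-uuid> and the DHHC-1 strings below are
# placeholders; the run above uses the keys generated earlier in the test.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:<host-uuid> --hostid <host-uuid> \
  --dhchap-secret 'DHHC-1:02:<host-secret>' \
  --dhchap-ctrl-secret 'DHHC-1:01:<controller-secret>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0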
00:16:57.960 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.960 10:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.960 10:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.960 10:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.960 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.960 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:57.960 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:58.218 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:58.218 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.218 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:58.218 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:58.218 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:58.218 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.218 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:58.218 10:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.218 10:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.218 10:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.218 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.218 10:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.478 00:16:58.739 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.739 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.739 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.739 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.739 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.739 10:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.739 10:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:58.997 10:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.997 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.997 { 00:16:58.997 "cntlid": 127, 00:16:58.997 "qid": 0, 00:16:58.997 "state": "enabled", 00:16:58.997 "thread": "nvmf_tgt_poll_group_000", 00:16:58.997 "listen_address": { 00:16:58.997 "trtype": "TCP", 00:16:58.997 "adrfam": "IPv4", 00:16:58.997 "traddr": "10.0.0.2", 00:16:58.997 "trsvcid": "4420" 00:16:58.998 }, 00:16:58.998 "peer_address": { 00:16:58.998 "trtype": "TCP", 00:16:58.998 "adrfam": "IPv4", 00:16:58.998 "traddr": "10.0.0.1", 00:16:58.998 "trsvcid": "37870" 00:16:58.998 }, 00:16:58.998 "auth": { 00:16:58.998 "state": "completed", 00:16:58.998 "digest": "sha512", 00:16:58.998 "dhgroup": "ffdhe4096" 00:16:58.998 } 00:16:58.998 } 00:16:58.998 ]' 00:16:58.998 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.998 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.998 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.998 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:58.998 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.998 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.998 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.998 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.255 10:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:17:00.263 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.263 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.263 10:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.263 10:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.263 10:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.263 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.263 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.263 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:00.263 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:00.520 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:17:00.520 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.520 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:00.521 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:00.521 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:00.521 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.521 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.521 10:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.521 10:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.521 10:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.521 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.521 10:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.091 00:17:01.091 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.091 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.091 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.350 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.350 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.350 10:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.350 10:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.350 10:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.350 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.350 { 00:17:01.350 "cntlid": 129, 00:17:01.350 "qid": 0, 00:17:01.350 "state": "enabled", 00:17:01.350 "thread": "nvmf_tgt_poll_group_000", 00:17:01.350 "listen_address": { 00:17:01.350 "trtype": "TCP", 00:17:01.350 "adrfam": "IPv4", 00:17:01.350 "traddr": "10.0.0.2", 00:17:01.350 "trsvcid": "4420" 00:17:01.350 }, 00:17:01.350 "peer_address": { 00:17:01.350 "trtype": "TCP", 00:17:01.350 "adrfam": "IPv4", 00:17:01.350 "traddr": "10.0.0.1", 00:17:01.350 "trsvcid": "37902" 00:17:01.350 }, 00:17:01.350 "auth": { 00:17:01.350 "state": "completed", 00:17:01.350 "digest": "sha512", 00:17:01.350 "dhgroup": "ffdhe6144" 00:17:01.350 } 00:17:01.350 } 00:17:01.350 ]' 00:17:01.350 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.350 10:29:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.350 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.350 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:01.350 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.350 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.350 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.350 10:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.607 10:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:17:02.544 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.544 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.544 10:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.544 10:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.544 10:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.544 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.544 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:02.544 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:02.802 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:02.802 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.802 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:02.802 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:02.802 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:02.802 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.802 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.802 10:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.802 10:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.802 10:29:57 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.802 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.802 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.371 00:17:03.371 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.371 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.371 10:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.629 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.629 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.629 10:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.629 10:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.629 10:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.629 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.629 { 00:17:03.629 "cntlid": 131, 00:17:03.629 "qid": 0, 00:17:03.629 "state": "enabled", 00:17:03.629 "thread": "nvmf_tgt_poll_group_000", 00:17:03.629 "listen_address": { 00:17:03.629 "trtype": "TCP", 00:17:03.629 "adrfam": "IPv4", 00:17:03.629 "traddr": "10.0.0.2", 00:17:03.629 "trsvcid": "4420" 00:17:03.629 }, 00:17:03.629 "peer_address": { 00:17:03.629 "trtype": "TCP", 00:17:03.629 "adrfam": "IPv4", 00:17:03.629 "traddr": "10.0.0.1", 00:17:03.629 "trsvcid": "37928" 00:17:03.629 }, 00:17:03.629 "auth": { 00:17:03.629 "state": "completed", 00:17:03.629 "digest": "sha512", 00:17:03.629 "dhgroup": "ffdhe6144" 00:17:03.629 } 00:17:03.629 } 00:17:03.629 ]' 00:17:03.629 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.887 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.887 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.887 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.887 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.887 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.887 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.887 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.144 10:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:17:05.080 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.080 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:05.080 10:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.080 10:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.080 10:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.080 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.080 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.080 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.336 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:05.336 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.336 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:05.336 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:05.336 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:05.336 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.336 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.336 10:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.336 10:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.336 10:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.336 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.336 10:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.901 00:17:05.901 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.901 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.901 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.160 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.160 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.160 10:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.160 10:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.160 10:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.160 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.160 { 00:17:06.160 "cntlid": 133, 00:17:06.160 "qid": 0, 00:17:06.160 "state": "enabled", 00:17:06.160 "thread": "nvmf_tgt_poll_group_000", 00:17:06.160 "listen_address": { 00:17:06.160 "trtype": "TCP", 00:17:06.160 "adrfam": "IPv4", 00:17:06.160 "traddr": "10.0.0.2", 00:17:06.160 "trsvcid": "4420" 00:17:06.160 }, 00:17:06.160 "peer_address": { 00:17:06.160 "trtype": "TCP", 00:17:06.160 "adrfam": "IPv4", 00:17:06.160 "traddr": "10.0.0.1", 00:17:06.160 "trsvcid": "37944" 00:17:06.160 }, 00:17:06.160 "auth": { 00:17:06.160 "state": "completed", 00:17:06.160 "digest": "sha512", 00:17:06.160 "dhgroup": "ffdhe6144" 00:17:06.160 } 00:17:06.160 } 00:17:06.160 ]' 00:17:06.160 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.160 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.160 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.160 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:06.160 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.160 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.160 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.160 10:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.418 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:17:07.351 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.351 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.351 10:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.351 10:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.351 10:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
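The loop iteration that just finished always drives the same RPC sequence; condensed into a standalone sketch (rpc.py path, sockets, NQNs, and the key2/ckey2 keyring names are all taken from this run):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
# 1. Pin the host to one digest/dhgroup so each pass tests a single combination.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
# 2. Authorize the host on the subsystem, naming its key and optional controller key.
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2
# 3. Attach a controller; --dhchap-key forces the new qpair through DH-HMAC-CHAP.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q $HOSTNQN -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# 4. After the checks, tear down so the next combination starts clean.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 $HOSTNQN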
00:17:07.351 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.351 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:07.351 10:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:07.609 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:07.609 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.609 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:07.609 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:07.609 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:07.609 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.609 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:07.609 10:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.609 10:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.609 10:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.609 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:07.609 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.176 00:17:08.176 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.176 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.176 10:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.433 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.433 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.433 10:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.433 10:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.434 10:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.434 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.434 { 00:17:08.434 "cntlid": 135, 00:17:08.434 "qid": 0, 00:17:08.434 "state": "enabled", 00:17:08.434 "thread": "nvmf_tgt_poll_group_000", 00:17:08.434 "listen_address": { 00:17:08.434 "trtype": "TCP", 00:17:08.434 "adrfam": "IPv4", 00:17:08.434 "traddr": "10.0.0.2", 00:17:08.434 "trsvcid": 
"4420" 00:17:08.434 }, 00:17:08.434 "peer_address": { 00:17:08.434 "trtype": "TCP", 00:17:08.434 "adrfam": "IPv4", 00:17:08.434 "traddr": "10.0.0.1", 00:17:08.434 "trsvcid": "60414" 00:17:08.434 }, 00:17:08.434 "auth": { 00:17:08.434 "state": "completed", 00:17:08.434 "digest": "sha512", 00:17:08.434 "dhgroup": "ffdhe6144" 00:17:08.434 } 00:17:08.434 } 00:17:08.434 ]' 00:17:08.434 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.709 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.709 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.709 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:08.709 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.709 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.709 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.709 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.970 10:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:17:09.917 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.917 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.917 10:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.917 10:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.917 10:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.917 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.917 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.917 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:09.917 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:10.175 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:10.175 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.175 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:10.175 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:10.175 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:10.175 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.175 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.175 10:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.175 10:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.175 10:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.175 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.175 10:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.108 00:17:11.108 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.108 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.108 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.365 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.365 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.365 10:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.365 10:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.365 10:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.365 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.365 { 00:17:11.365 "cntlid": 137, 00:17:11.365 "qid": 0, 00:17:11.365 "state": "enabled", 00:17:11.365 "thread": "nvmf_tgt_poll_group_000", 00:17:11.365 "listen_address": { 00:17:11.365 "trtype": "TCP", 00:17:11.365 "adrfam": "IPv4", 00:17:11.365 "traddr": "10.0.0.2", 00:17:11.365 "trsvcid": "4420" 00:17:11.365 }, 00:17:11.365 "peer_address": { 00:17:11.365 "trtype": "TCP", 00:17:11.365 "adrfam": "IPv4", 00:17:11.365 "traddr": "10.0.0.1", 00:17:11.365 "trsvcid": "60440" 00:17:11.365 }, 00:17:11.365 "auth": { 00:17:11.365 "state": "completed", 00:17:11.365 "digest": "sha512", 00:17:11.365 "dhgroup": "ffdhe8192" 00:17:11.365 } 00:17:11.365 } 00:17:11.365 ]' 00:17:11.365 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.365 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.366 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.366 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:11.366 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.366 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:11.366 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.366 10:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.625 10:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.001 10:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.938 00:17:13.938 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.938 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.938 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.195 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.195 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.195 10:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.195 10:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.195 10:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.195 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.195 { 00:17:14.195 "cntlid": 139, 00:17:14.195 "qid": 0, 00:17:14.195 "state": "enabled", 00:17:14.195 "thread": "nvmf_tgt_poll_group_000", 00:17:14.195 "listen_address": { 00:17:14.195 "trtype": "TCP", 00:17:14.195 "adrfam": "IPv4", 00:17:14.195 "traddr": "10.0.0.2", 00:17:14.195 "trsvcid": "4420" 00:17:14.195 }, 00:17:14.195 "peer_address": { 00:17:14.195 "trtype": "TCP", 00:17:14.195 "adrfam": "IPv4", 00:17:14.196 "traddr": "10.0.0.1", 00:17:14.196 "trsvcid": "60460" 00:17:14.196 }, 00:17:14.196 "auth": { 00:17:14.196 "state": "completed", 00:17:14.196 "digest": "sha512", 00:17:14.196 "dhgroup": "ffdhe8192" 00:17:14.196 } 00:17:14.196 } 00:17:14.196 ]' 00:17:14.196 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.196 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.196 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.196 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.196 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.196 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.196 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.196 10:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.452 10:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NzMwMWU0YzY5Y2E3YWU4OGFkNmRmNDc0YmE5MmQ1NjJYVoAo: --dhchap-ctrl-secret DHHC-1:02:Njk3MjdhMzExZGNiY2RlM2RmY2JmZjQxOGM1NjQ3Yzg0MDIxNGQzMjZiZjE5OTBm27PCqg==: 00:17:15.414 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
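The [[ ... ]] comparisons above are driven by jq probes of the attached controller and its qpair; here is that verification step in isolation, using the same filters as the trace and the expected values for the sha512/ffdhe8192 pass just completed:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# The controller came up under the expected name on the host side.
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
# The qpair's auth block records what was actually negotiated.
$RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
  | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'           # expect: sha512, ffdhe8192, completed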
00:17:15.414 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.414 10:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.414 10:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.414 10:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.414 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.414 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.414 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.672 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:15.672 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.672 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:15.672 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:15.672 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:15.672 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.672 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.672 10:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.672 10:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.672 10:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.672 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.672 10:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.605 00:17:16.605 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.605 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.605 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.863 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.863 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.863 10:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
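The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line recurring at target/auth.sh@37 (where $3 is the key index passed to connect_authenticate) is what makes controller authentication optional per key: bash's ${var:+words} expands to the alternate words only when var is set and non-empty. A stripped-down illustration with made-up secrets, mirroring this run where key3 carries no controller key:

# ${ckeys[i]:+...} emits the flag pair only when a controller key exists for index i,
# so key3 below yields an empty array and add_host would get no --dhchap-ctrlr-key.
ckeys=("ctrl-secret-0" "ctrl-secret-1" "ctrl-secret-2" "")   # illustrative values only
for i in 0 1 2 3; do
  ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
  echo "key$i -> ${ckey[@]:-"(unidirectional: no controller key)"}"
done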
00:17:16.863 10:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.863 10:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.863 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.863 { 00:17:16.863 "cntlid": 141, 00:17:16.863 "qid": 0, 00:17:16.863 "state": "enabled", 00:17:16.863 "thread": "nvmf_tgt_poll_group_000", 00:17:16.863 "listen_address": { 00:17:16.863 "trtype": "TCP", 00:17:16.863 "adrfam": "IPv4", 00:17:16.863 "traddr": "10.0.0.2", 00:17:16.863 "trsvcid": "4420" 00:17:16.863 }, 00:17:16.863 "peer_address": { 00:17:16.863 "trtype": "TCP", 00:17:16.863 "adrfam": "IPv4", 00:17:16.863 "traddr": "10.0.0.1", 00:17:16.863 "trsvcid": "60490" 00:17:16.863 }, 00:17:16.863 "auth": { 00:17:16.863 "state": "completed", 00:17:16.863 "digest": "sha512", 00:17:16.863 "dhgroup": "ffdhe8192" 00:17:16.863 } 00:17:16.863 } 00:17:16.863 ]' 00:17:16.863 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.863 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.863 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.120 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:17.120 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.120 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.120 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.120 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.377 10:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzE4ZTBiMWFjNDI1MzI5NGQ2YjVjYzY4M2NiOTBiZmNjYzA4ZTU2MDUwOTVlNzc04jMeTA==: --dhchap-ctrl-secret DHHC-1:01:YWYyZTdjMjBkMTcxYzk1MmQyMThkMTQ3YTU2MTNkZjW2/b38: 00:17:18.309 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.309 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:18.309 10:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.309 10:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.309 10:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.309 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.309 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:18.309 10:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:18.567 10:30:13 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:17:18.567 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.567 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:18.567 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:18.567 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:18.567 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.567 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:18.567 10:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.567 10:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.567 10:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.567 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.567 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.499 00:17:19.500 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.500 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.500 10:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.757 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.757 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.757 10:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.757 10:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.757 10:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.757 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.757 { 00:17:19.757 "cntlid": 143, 00:17:19.757 "qid": 0, 00:17:19.757 "state": "enabled", 00:17:19.757 "thread": "nvmf_tgt_poll_group_000", 00:17:19.757 "listen_address": { 00:17:19.757 "trtype": "TCP", 00:17:19.757 "adrfam": "IPv4", 00:17:19.757 "traddr": "10.0.0.2", 00:17:19.757 "trsvcid": "4420" 00:17:19.757 }, 00:17:19.757 "peer_address": { 00:17:19.757 "trtype": "TCP", 00:17:19.757 "adrfam": "IPv4", 00:17:19.757 "traddr": "10.0.0.1", 00:17:19.757 "trsvcid": "38772" 00:17:19.757 }, 00:17:19.757 "auth": { 00:17:19.757 "state": "completed", 00:17:19.757 "digest": "sha512", 00:17:19.757 "dhgroup": "ffdhe8192" 00:17:19.757 } 00:17:19.757 } 00:17:19.757 ]' 00:17:19.757 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.757 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 
-- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.757 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.757 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.757 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.757 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.757 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.757 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.017 10:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:17:20.951 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.951 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:20.951 10:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.951 10:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.951 10:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.951 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:20.951 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:20.951 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:20.951 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:20.951 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:20.951 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:21.520 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:21.520 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.520 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:21.520 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:21.520 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:21.520 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.520 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.520 10:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.520 10:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.521 10:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.521 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.521 10:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.092 00:17:22.352 10:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.352 10:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.352 10:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.611 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.611 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.611 10:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.611 10:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.611 10:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.611 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.611 { 00:17:22.611 "cntlid": 145, 00:17:22.611 "qid": 0, 00:17:22.611 "state": "enabled", 00:17:22.611 "thread": "nvmf_tgt_poll_group_000", 00:17:22.611 "listen_address": { 00:17:22.611 "trtype": "TCP", 00:17:22.611 "adrfam": "IPv4", 00:17:22.611 "traddr": "10.0.0.2", 00:17:22.611 "trsvcid": "4420" 00:17:22.611 }, 00:17:22.611 "peer_address": { 00:17:22.611 "trtype": "TCP", 00:17:22.611 "adrfam": "IPv4", 00:17:22.611 "traddr": "10.0.0.1", 00:17:22.611 "trsvcid": "38802" 00:17:22.611 }, 00:17:22.611 "auth": { 00:17:22.611 "state": "completed", 00:17:22.611 "digest": "sha512", 00:17:22.611 "dhgroup": "ffdhe8192" 00:17:22.611 } 00:17:22.611 } 00:17:22.611 ]' 00:17:22.611 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.611 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.611 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.611 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:22.611 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.611 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.611 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.611 10:30:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.869 10:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NjY0MDlhMzJmZDQzODI3ZWYzMGExMmFhNmZiMjczYTk5YmQzYWMyYTFhYTA1MDBmsamuOA==: --dhchap-ctrl-secret DHHC-1:03:MWI0OGI0MWY5OGNjOTg2MzRkMDQyYjQ1ZWM2OGQzYjFiNDhiNDAzNjIwNDhhNzM1MWRkZmM3NDc4NGVmNTM1MOQ+LLo=: 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:23.806 10:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:24.741 request: 00:17:24.741 { 00:17:24.741 "name": "nvme0", 00:17:24.741 "trtype": "tcp", 00:17:24.741 "traddr": "10.0.0.2", 00:17:24.741 "adrfam": "ipv4", 00:17:24.741 "trsvcid": "4420", 00:17:24.741 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:24.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:24.741 "prchk_reftag": false, 00:17:24.741 "prchk_guard": false, 00:17:24.741 "hdgst": false, 00:17:24.741 "ddgst": false, 00:17:24.741 "dhchap_key": "key2", 00:17:24.741 "method": "bdev_nvme_attach_controller", 00:17:24.741 "req_id": 1 00:17:24.741 } 00:17:24.741 Got JSON-RPC error response 00:17:24.741 response: 00:17:24.741 { 00:17:24.741 "code": -5, 00:17:24.741 "message": "Input/output error" 00:17:24.741 } 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:24.741 10:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:25.682 request: 00:17:25.682 { 00:17:25.682 "name": "nvme0", 00:17:25.682 "trtype": "tcp", 00:17:25.682 "traddr": "10.0.0.2", 00:17:25.682 "adrfam": "ipv4", 00:17:25.682 "trsvcid": "4420", 00:17:25.682 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:25.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:25.682 "prchk_reftag": false, 00:17:25.682 "prchk_guard": false, 00:17:25.682 "hdgst": false, 00:17:25.682 "ddgst": false, 00:17:25.682 "dhchap_key": "key1", 00:17:25.682 "dhchap_ctrlr_key": "ckey2", 00:17:25.682 "method": "bdev_nvme_attach_controller", 00:17:25.682 "req_id": 1 00:17:25.682 } 00:17:25.682 Got JSON-RPC error response 00:17:25.682 response: 00:17:25.682 { 00:17:25.682 "code": -5, 00:17:25.682 "message": "Input/output error" 00:17:25.682 } 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.682 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.618 request: 00:17:26.618 { 00:17:26.618 "name": "nvme0", 00:17:26.618 "trtype": "tcp", 00:17:26.618 "traddr": "10.0.0.2", 00:17:26.618 "adrfam": "ipv4", 00:17:26.618 "trsvcid": "4420", 00:17:26.618 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:26.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:26.618 "prchk_reftag": false, 00:17:26.618 "prchk_guard": false, 00:17:26.618 "hdgst": false, 00:17:26.618 "ddgst": false, 00:17:26.618 "dhchap_key": "key1", 00:17:26.618 "dhchap_ctrlr_key": "ckey1", 00:17:26.618 "method": "bdev_nvme_attach_controller", 00:17:26.618 "req_id": 1 00:17:26.618 } 00:17:26.618 Got JSON-RPC error response 00:17:26.618 response: 00:17:26.618 { 00:17:26.618 "code": -5, 00:17:26.618 "message": "Input/output error" 00:17:26.618 } 00:17:26.618 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:26.618 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:26.618 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:26.618 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:26.618 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:26.618 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.618 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.618 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.618 10:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2301834 00:17:26.618 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2301834 ']' 00:17:26.618 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2301834 00:17:26.618 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:26.618 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:26.618 10:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2301834 00:17:26.618 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:26.618 10:30:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:26.618 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2301834' 00:17:26.618 killing process with pid 2301834 00:17:26.618 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2301834 00:17:26.618 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2301834 00:17:26.877 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:26.877 10:30:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:26.877 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:26.877 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.877 10:30:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2324628 00:17:26.877 10:30:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:26.877 10:30:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2324628 00:17:26.877 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2324628 ']' 00:17:26.877 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.877 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:26.877 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.877 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:26.877 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.135 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:27.135 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:27.135 10:30:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:27.135 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:27.135 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.135 10:30:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.135 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:27.135 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2324628 00:17:27.135 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2324628 ']' 00:17:27.135 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.135 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:27.135 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
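At this point the target has been restarted with --wait-for-rpc, and waitforlisten blocks until the new nvmf_tgt process (pid 2324628) is alive and its RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that polling loop, assuming an SPDK checkout so scripts/rpc.py and its rpc_get_methods call are available to probe the socket; the function name and retry counts are illustrative, not SPDK's exact helper:

    # Poll until $pid is alive and its UNIX-domain RPC socket responds.
    # Illustrative re-implementation of the waitforlisten pattern traced above.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # The process must still exist...
            kill -0 "$pid" 2>/dev/null || return 1
            # ...and the socket must answer a trivial RPC before we proceed.
            if [[ -S $rpc_addr ]] &&
               scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1  # timed out waiting for the target to come up
    }

Only once this returns does the test issue the nvmf_subsystem_* RPCs that follow.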
00:17:27.135 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:27.135 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.393 10:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:28.332 00:17:28.332 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.332 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.332 10:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.591 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.591 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.591 10:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.591 10:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.591 10:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.591 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.591 { 00:17:28.591 
"cntlid": 1, 00:17:28.591 "qid": 0, 00:17:28.591 "state": "enabled", 00:17:28.591 "thread": "nvmf_tgt_poll_group_000", 00:17:28.591 "listen_address": { 00:17:28.591 "trtype": "TCP", 00:17:28.591 "adrfam": "IPv4", 00:17:28.591 "traddr": "10.0.0.2", 00:17:28.591 "trsvcid": "4420" 00:17:28.591 }, 00:17:28.591 "peer_address": { 00:17:28.591 "trtype": "TCP", 00:17:28.591 "adrfam": "IPv4", 00:17:28.591 "traddr": "10.0.0.1", 00:17:28.591 "trsvcid": "40050" 00:17:28.591 }, 00:17:28.591 "auth": { 00:17:28.591 "state": "completed", 00:17:28.591 "digest": "sha512", 00:17:28.591 "dhgroup": "ffdhe8192" 00:17:28.591 } 00:17:28.591 } 00:17:28.591 ]' 00:17:28.591 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.591 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.591 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.591 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.591 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.848 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.848 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.848 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.108 10:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGVkYjYyMjI4MGU2NWYxZWIxZjQwY2M1ODVmNDI5ZDMwZTU0ODMyYzY3Y2FlMDRjMjU1OTRkNmJkYjExMjc2YxCTURk=: 00:17:30.044 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.044 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:30.044 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.044 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.044 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.044 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:30.044 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.044 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.044 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.044 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:30.045 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:30.303 10:30:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.303 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:30.303 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.303 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:30.303 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:30.303 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:30.303 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:30.303 10:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.303 10:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.560 request: 00:17:30.560 { 00:17:30.560 "name": "nvme0", 00:17:30.560 "trtype": "tcp", 00:17:30.560 "traddr": "10.0.0.2", 00:17:30.560 "adrfam": "ipv4", 00:17:30.560 "trsvcid": "4420", 00:17:30.560 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:30.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:30.560 "prchk_reftag": false, 00:17:30.560 "prchk_guard": false, 00:17:30.560 "hdgst": false, 00:17:30.560 "ddgst": false, 00:17:30.560 "dhchap_key": "key3", 00:17:30.560 "method": "bdev_nvme_attach_controller", 00:17:30.560 "req_id": 1 00:17:30.560 } 00:17:30.560 Got JSON-RPC error response 00:17:30.560 response: 00:17:30.560 { 00:17:30.560 "code": -5, 00:17:30.560 "message": "Input/output error" 00:17:30.560 } 00:17:30.560 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:30.560 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:30.560 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:30.561 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:30.561 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:30.561 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:30.561 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:30.561 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:30.841 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.841 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:30.841 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.841 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:30.841 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:30.841 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:30.841 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:30.841 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.841 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.141 request: 00:17:31.141 { 00:17:31.141 "name": "nvme0", 00:17:31.141 "trtype": "tcp", 00:17:31.141 "traddr": "10.0.0.2", 00:17:31.141 "adrfam": "ipv4", 00:17:31.141 "trsvcid": "4420", 00:17:31.141 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:31.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:31.141 "prchk_reftag": false, 00:17:31.141 "prchk_guard": false, 00:17:31.141 "hdgst": false, 00:17:31.141 "ddgst": false, 00:17:31.141 "dhchap_key": "key3", 00:17:31.141 "method": "bdev_nvme_attach_controller", 00:17:31.141 "req_id": 1 00:17:31.141 } 00:17:31.141 Got JSON-RPC error response 00:17:31.141 response: 00:17:31.141 { 00:17:31.141 "code": -5, 00:17:31.141 "message": "Input/output error" 00:17:31.141 } 00:17:31.141 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:31.141 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:31.141 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:31.141 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:31.141 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:31.141 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:31.141 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:31.141 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:31.141 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:31.141 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:31.141 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.141 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.141 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.141 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.141 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.141 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.141 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.401 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.401 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:31.401 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:31.401 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:31.401 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:31.401 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.401 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:31.401 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.401 10:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:31.401 10:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:31.401 request: 00:17:31.401 { 00:17:31.401 "name": "nvme0", 00:17:31.401 "trtype": "tcp", 00:17:31.401 "traddr": "10.0.0.2", 00:17:31.401 "adrfam": "ipv4", 00:17:31.401 "trsvcid": "4420", 00:17:31.401 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:31.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:31.401 "prchk_reftag": false, 00:17:31.401 "prchk_guard": false, 00:17:31.401 "hdgst": false, 00:17:31.401 "ddgst": false, 00:17:31.401 
"dhchap_key": "key0", 00:17:31.401 "dhchap_ctrlr_key": "key1", 00:17:31.401 "method": "bdev_nvme_attach_controller", 00:17:31.401 "req_id": 1 00:17:31.401 } 00:17:31.401 Got JSON-RPC error response 00:17:31.401 response: 00:17:31.401 { 00:17:31.401 "code": -5, 00:17:31.401 "message": "Input/output error" 00:17:31.401 } 00:17:31.660 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:31.660 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:31.660 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:31.660 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:31.660 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:31.660 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:31.918 00:17:31.918 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:31.918 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.918 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:32.176 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.176 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.176 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.435 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:32.435 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:32.435 10:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2301992 00:17:32.435 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2301992 ']' 00:17:32.435 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2301992 00:17:32.435 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:32.435 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.435 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2301992 00:17:32.435 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:32.435 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:32.435 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2301992' 00:17:32.435 killing process with pid 2301992 00:17:32.435 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2301992 00:17:32.435 10:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2301992 
00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:33.004 rmmod nvme_tcp 00:17:33.004 rmmod nvme_fabrics 00:17:33.004 rmmod nvme_keyring 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2324628 ']' 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2324628 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2324628 ']' 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2324628 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2324628 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2324628' 00:17:33.004 killing process with pid 2324628 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2324628 00:17:33.004 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2324628 00:17:33.262 10:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:33.262 10:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:33.262 10:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:33.262 10:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.262 10:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:33.262 10:30:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.262 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.262 10:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.170 10:30:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:35.170 10:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.nFc /tmp/spdk.key-sha256.2yN /tmp/spdk.key-sha384.Ui1 /tmp/spdk.key-sha512.SIb /tmp/spdk.key-sha512.nzF /tmp/spdk.key-sha384.Bcl /tmp/spdk.key-sha256.vm3 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:35.170 00:17:35.170 real 3m10.786s 00:17:35.170 user 7m24.072s 00:17:35.170 sys 0m24.868s 00:17:35.170 10:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:35.170 10:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.170 ************************************ 00:17:35.170 END TEST nvmf_auth_target 00:17:35.170 ************************************ 00:17:35.170 10:30:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:35.170 10:30:29 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:35.170 10:30:29 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:35.170 10:30:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:35.170 10:30:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:35.170 10:30:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:35.170 ************************************ 00:17:35.170 START TEST nvmf_bdevio_no_huge 00:17:35.170 ************************************ 00:17:35.170 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:35.429 * Looking for test storage... 00:17:35.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
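The gather_supported_nvmf_pci_devs walk traced below matches PCI functions against a whitelist of Intel e810/x722 and Mellanox device IDs, then keeps only devices that expose a net interface in sysfs ("Found net devices under 0000:0a:00.0: cvl_0_0"). The core of that sysfs scan, reduced to a sketch using the e810 IDs seen in the log (0x8086:0x159b); the function name is illustrative:

    # Enumerate e810 NICs the way the trace below does: match the PCI
    # vendor/device IDs in sysfs, then list the net devices under each one.
    find_e810_net_devs() {
        local pci net_dev
        for pci in /sys/bus/pci/devices/*; do
            [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
            [[ $(cat "$pci/device") == 0x159b ]] || continue
            echo "Found ${pci##*/} (0x8086 - 0x159b)"
            for net_dev in "$pci"/net/*; do
                [[ -e $net_dev ]] && echo "  net device: ${net_dev##*/}"
            done
        done
    }

With two such ports found, the test picks one as the target interface (cvl_0_0) and one as the initiator (cvl_0_1), as the nvmf_tcp_init trace shows.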
00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.429 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.430 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.430 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.430 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:35.430 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:35.430 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:35.430 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:35.430 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.430 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:35.430 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:35.430 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:35.430 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.430 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.430 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.430 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:35.430 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:35.430 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:35.430 10:30:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:37.334 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:37.334 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:37.334 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.334 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:37.335 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:37.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:37.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:17:37.335 00:17:37.335 --- 10.0.0.2 ping statistics --- 00:17:37.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.335 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:37.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:17:37.335 00:17:37.335 --- 10.0.0.1 ping statistics --- 00:17:37.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.335 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2327382 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2327382 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2327382 ']' 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:37.335 10:30:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:37.335 [2024-07-15 10:30:31.920454] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:37.335 [2024-07-15 10:30:31.920554] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:37.595 [2024-07-15 10:30:31.994794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:37.595 [2024-07-15 10:30:32.112351] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:37.595 [2024-07-15 10:30:32.112410] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.595 [2024-07-15 10:30:32.112436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.595 [2024-07-15 10:30:32.112449] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.595 [2024-07-15 10:30:32.112461] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.595 [2024-07-15 10:30:32.112572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:37.595 [2024-07-15 10:30:32.112653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:37.595 [2024-07-15 10:30:32.112755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:37.595 [2024-07-15 10:30:32.112762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.536 [2024-07-15 10:30:32.904183] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.536 Malloc0 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.536 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.536 10:30:32 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:38.537 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.537 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.537 [2024-07-15 10:30:32.941787] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.537 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.537 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:38.537 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:38.537 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:38.537 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:38.537 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:38.537 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:38.537 { 00:17:38.537 "params": { 00:17:38.537 "name": "Nvme$subsystem", 00:17:38.537 "trtype": "$TEST_TRANSPORT", 00:17:38.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:38.537 "adrfam": "ipv4", 00:17:38.537 "trsvcid": "$NVMF_PORT", 00:17:38.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:38.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:38.537 "hdgst": ${hdgst:-false}, 00:17:38.537 "ddgst": ${ddgst:-false} 00:17:38.537 }, 00:17:38.537 "method": "bdev_nvme_attach_controller" 00:17:38.537 } 00:17:38.537 EOF 00:17:38.537 )") 00:17:38.537 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:38.537 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:38.537 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:38.537 10:30:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:38.537 "params": { 00:17:38.537 "name": "Nvme1", 00:17:38.537 "trtype": "tcp", 00:17:38.537 "traddr": "10.0.0.2", 00:17:38.537 "adrfam": "ipv4", 00:17:38.537 "trsvcid": "4420", 00:17:38.537 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.537 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.537 "hdgst": false, 00:17:38.537 "ddgst": false 00:17:38.537 }, 00:17:38.537 "method": "bdev_nvme_attach_controller" 00:17:38.537 }' 00:17:38.537 [2024-07-15 10:30:32.986020] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:38.537 [2024-07-15 10:30:32.986099] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2327537 ] 00:17:38.537 [2024-07-15 10:30:33.050246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:38.537 [2024-07-15 10:30:33.164757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.537 [2024-07-15 10:30:33.164805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.537 [2024-07-15 10:30:33.164808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.797 I/O targets: 00:17:38.797 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:38.797 00:17:38.797 00:17:38.797 CUnit - A unit testing framework for C - Version 2.1-3 00:17:38.797 http://cunit.sourceforge.net/ 00:17:38.797 00:17:38.797 00:17:38.797 Suite: bdevio tests on: Nvme1n1 00:17:38.797 Test: blockdev write read block ...passed 00:17:38.797 Test: blockdev write zeroes read block ...passed 00:17:38.797 Test: blockdev write zeroes read no split ...passed 00:17:39.057 Test: blockdev write zeroes read split ...passed 00:17:39.057 Test: blockdev write zeroes read split partial ...passed 00:17:39.057 Test: blockdev reset ...[2024-07-15 10:30:33.535355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:39.057 [2024-07-15 10:30:33.535474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1376fb0 (9): Bad file descriptor 00:17:39.057 [2024-07-15 10:30:33.546567] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:39.057 passed 00:17:39.057 Test: blockdev write read 8 blocks ...passed 00:17:39.057 Test: blockdev write read size > 128k ...passed 00:17:39.057 Test: blockdev write read invalid size ...passed 00:17:39.057 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:39.057 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:39.057 Test: blockdev write read max offset ...passed 00:17:39.057 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:39.057 Test: blockdev writev readv 8 blocks ...passed 00:17:39.057 Test: blockdev writev readv 30 x 1block ...passed 00:17:39.317 Test: blockdev writev readv block ...passed 00:17:39.317 Test: blockdev writev readv size > 128k ...passed 00:17:39.317 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:39.317 Test: blockdev comparev and writev ...[2024-07-15 10:30:33.720043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:39.317 [2024-07-15 10:30:33.720079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.317 [2024-07-15 10:30:33.720104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:39.317 [2024-07-15 10:30:33.720120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.317 [2024-07-15 10:30:33.720519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:39.317 [2024-07-15 10:30:33.720543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:39.317 [2024-07-15 10:30:33.720565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:39.317 [2024-07-15 10:30:33.720582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:39.317 [2024-07-15 10:30:33.720973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:39.317 [2024-07-15 10:30:33.720998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:39.317 [2024-07-15 10:30:33.721021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:39.317 [2024-07-15 10:30:33.721038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:39.317 [2024-07-15 10:30:33.721416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:39.317 [2024-07-15 10:30:33.721441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:39.317 [2024-07-15 10:30:33.721463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:39.317 [2024-07-15 10:30:33.721480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:39.317 passed 00:17:39.317 Test: blockdev nvme passthru rw ...passed 00:17:39.317 Test: blockdev nvme passthru vendor specific ...[2024-07-15 10:30:33.803223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:39.317 [2024-07-15 10:30:33.803250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:39.317 [2024-07-15 10:30:33.803423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:39.317 [2024-07-15 10:30:33.803446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:39.317 [2024-07-15 10:30:33.803615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:39.317 [2024-07-15 10:30:33.803639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:39.317 [2024-07-15 10:30:33.803815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:39.317 [2024-07-15 10:30:33.803838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:39.317 passed 00:17:39.317 Test: blockdev nvme admin passthru ...passed 00:17:39.317 Test: blockdev copy ...passed 00:17:39.317 00:17:39.317 Run Summary: Type Total Ran Passed Failed Inactive 00:17:39.317 suites 1 1 n/a 0 0 00:17:39.317 tests 23 23 23 0 0 00:17:39.317 asserts 152 152 152 0 n/a 00:17:39.317 00:17:39.317 Elapsed time = 1.058 seconds 00:17:39.576 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:39.576 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.576 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:39.834 rmmod nvme_tcp 00:17:39.834 rmmod nvme_fabrics 00:17:39.834 rmmod nvme_keyring 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2327382 ']' 00:17:39.834 10:30:34 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2327382 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2327382 ']' 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2327382 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2327382 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2327382' 00:17:39.834 killing process with pid 2327382 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2327382 00:17:39.834 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2327382 00:17:40.093 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:40.093 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:40.093 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:40.093 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:40.093 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:40.093 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.093 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.093 10:30:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.631 10:30:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:42.631 00:17:42.631 real 0m6.954s 00:17:42.631 user 0m12.847s 00:17:42.631 sys 0m2.411s 00:17:42.631 10:30:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:42.631 10:30:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:42.631 ************************************ 00:17:42.631 END TEST nvmf_bdevio_no_huge 00:17:42.631 ************************************ 00:17:42.631 10:30:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:42.631 10:30:36 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:42.631 10:30:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:42.631 10:30:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:42.631 10:30:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:42.631 ************************************ 00:17:42.631 START TEST nvmf_tls 00:17:42.631 ************************************ 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:42.631 * Looking for test storage... 
00:17:42.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.631 10:30:36 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:42.632 10:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:44.534 
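The gather_supported_nvmf_pci_devs trace that follows (the same routine ran earlier for the bdevio test) classifies NICs by PCI vendor:device ID and then keeps only the family selected by SPDK_TEST_NVMF_NICS. A minimal sketch of that lookup, assuming a pre-populated pci_bus_cache map from "vendor:device" to PCI addresses as the trace implies:

    # Sketch: group NICs by vendor:device ID, then select one family.
    declare -A pci_bus_cache                     # assumed: "vendor:device" -> PCI addrs
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # Intel E810 family
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # the 0000:0a:00.x ports found below
    x722+=(${pci_bus_cache["$intel:0x37d2"]})    # Intel X722 family
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several Mellanox IDs checked
    pci_devs=("${e810[@]}")                      # SPDK_TEST_NVMF_NICS=e810
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
    done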
10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:44.534 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:44.534 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:44.534 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:44.534 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:44.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:44.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:17:44.534 00:17:44.534 --- 10.0.0.2 ping statistics --- 00:17:44.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.534 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:17:44.534 10:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:44.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:44.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:17:44.534 00:17:44.534 --- 10.0.0.1 ping statistics --- 00:17:44.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.535 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2329607 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2329607 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2329607 ']' 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.535 10:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.535 [2024-07-15 10:30:39.081423] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:44.535 [2024-07-15 10:30:39.081487] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.535 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.535 [2024-07-15 10:30:39.149175] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.793 [2024-07-15 10:30:39.267640] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.793 [2024-07-15 10:30:39.267687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:44.793 [2024-07-15 10:30:39.267703] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:44.793 [2024-07-15 10:30:39.267717] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:44.793 [2024-07-15 10:30:39.267728] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:44.793 [2024-07-15 10:30:39.267757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:17:45.725 10:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:45.725 10:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:17:45.725 10:30:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:17:45.725 10:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable
00:17:45.725 10:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:17:45.725 10:30:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:45.725 10:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']'
00:17:45.725 10:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl
00:17:45.725 true
00:17:45.725 10:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:17:45.725 10:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version
00:17:45.983 10:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0
00:17:45.983 10:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]]
00:17:45.983 10:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:17:46.241 10:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:17:46.241 10:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version
00:17:46.497 10:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13
00:17:46.497 10:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]]
00:17:46.497 10:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7
00:17:46.753 10:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:17:46.753 10:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version
00:17:47.011 10:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7
00:17:47.011 10:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]]
00:17:47.011 10:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:17:47.011 10:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls
00:17:47.270 10:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false
00:17:47.270 10:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]]
00:17:47.270 10:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls
00:17:47.529 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:17:47.529 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls
00:17:47.823 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true
00:17:47.823 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]]
00:17:47.823 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls
00:17:48.080 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:17:48.080 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]]
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python -
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python -
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.TQBXgr3MjH
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.GCKoofI4BP
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.TQBXgr3MjH
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.GCKoofI4BP
00:17:48.338 10:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:17:48.595 10:30:43 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:17:49.162 10:30:43 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.TQBXgr3MjH
00:17:49.162 10:30:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.TQBXgr3MjH
00:17:49.163 10:30:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:17:49.163 [2024-07-15 10:30:43.806396] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:49.421 10:30:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:17:49.421 10:30:44 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:17:49.679 [2024-07-15 10:30:44.299741] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:17:49.679 [2024-07-15 10:30:44.300051] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:49.679 10:30:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:17:50.244 malloc0
00:17:50.244 10:30:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:17:50.244 10:30:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TQBXgr3MjH
00:17:50.503 [2024-07-15 10:30:45.129938] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:17:50.503 10:30:45 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.TQBXgr3MjH
00:17:50.762 EAL: No free 2048 kB hugepages reported on node 1
00:18:00.744 Initializing NVMe Controllers
00:18:00.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:18:00.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:00.744 Initialization complete. Launching workers.
00:18:00.744 ========================================================
00:18:00.744 Latency(us)
00:18:00.744 Device Information : IOPS MiB/s Average min max
00:18:00.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7521.38 29.38 8511.98 1366.13 9435.40
00:18:00.744 ========================================================
00:18:00.744 Total : 7521.38 29.38 8511.98 1366.13 9435.40
00:18:00.744
00:18:00.744 10:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TQBXgr3MjH
00:18:00.744 10:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:00.744 10:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:00.744 10:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:00.744 10:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TQBXgr3MjH'
00:18:00.744 10:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:00.744 10:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2331510
00:18:00.744 10:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:00.744 10:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2331510 /var/tmp/bdevperf.sock
00:18:00.744 10:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2331510 ']'
00:18:00.744 10:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:00.744 10:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:00.744 10:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:00.744 10:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:00.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:00.745 10:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:00.745 10:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:00.745 [2024-07-15 10:30:55.304213] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
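Up to this point the flow is: pin the ssl socket implementation's options, generate two interchange-format PSKs, then bring the target up and measure it with spdk_nvme_perf over TLS. A condensed, runnable sketch of that bring-up (the $rpc shorthand is an illustrative stand-in; every command itself appears verbatim in the trace above):

    # sketch: TLS-enabled target bring-up as exercised by setup_nvmf_tgt above
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key=/tmp/tmp.TQBXgr3MjH                                    # interchange-format PSK file, mode 0600
    $rpc sock_set_default_impl -i ssl                          # use the ssl (TLS-capable POSIX) sock impl
    $rpc sock_impl_set_options -i ssl --tls-version 13         # pin TLS 1.3 before framework init
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"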
00:18:00.745 [2024-07-15 10:30:55.304304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2331510 ]
00:18:00.745 EAL: No free 2048 kB hugepages reported on node 1
00:18:00.745 [2024-07-15 10:30:55.362044] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:01.003 [2024-07-15 10:30:55.471837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:18:01.003 10:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:01.003 10:30:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:18:01.003 10:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TQBXgr3MjH
00:18:01.262 [2024-07-15 10:30:55.855834] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:01.262 [2024-07-15 10:30:55.855979] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:18:01.519 TLSTESTn1
00:18:01.519 10:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:18:01.520 Running I/O for 10 seconds...
00:18:11.512
00:18:11.512 Latency(us)
00:18:11.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:11.512 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:11.512 Verification LBA range: start 0x0 length 0x2000
00:18:11.512 TLSTESTn1 : 10.04 3105.79 12.13 0.00 0.00 41111.48 5849.69 70293.43
00:18:11.512 ===================================================================================================================
00:18:11.512 Total : 3105.79 12.13 0.00 0.00 41111.48 5849.69 70293.43
00:18:11.512 0
00:18:11.512 10:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:11.512 10:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2331510
00:18:11.512 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2331510 ']'
00:18:11.512 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2331510
00:18:11.512 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:18:11.512 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:11.512 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2331510
00:18:11.770 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:18:11.770 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:18:11.770 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2331510'
00:18:11.770 killing process with pid 2331510
00:18:11.770 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2331510
00:18:11.770 Received shutdown signal, test time was about 10.000000 seconds
00:18:11.770
00:18:11.770 Latency(us)
00:18:11.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:11.770 ===================================================================================================================
00:18:11.770 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:11.770 [2024-07-15 10:31:06.179087] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:18:11.770 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2331510
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GCKoofI4BP
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GCKoofI4BP
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GCKoofI4BP
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GCKoofI4BP'
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2332820
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2332820 /var/tmp/bdevperf.sock
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2332820 ']'
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:12.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:12.027 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:12.027 [2024-07-15 10:31:06.500541] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:18:12.027 [2024-07-15 10:31:06.500635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2332820 ]
00:18:12.027 EAL: No free 2048 kB hugepages reported on node 1
00:18:12.027 [2024-07-15 10:31:06.558458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:12.027 [2024-07-15 10:31:06.662942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:18:12.285 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:12.285 10:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:18:12.285 10:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GCKoofI4BP
00:18:12.544 [2024-07-15 10:31:07.013009] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:12.544 [2024-07-15 10:31:07.013142] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:18:12.544 [2024-07-15 10:31:07.023684] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:18:12.544 [2024-07-15 10:31:07.024088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae2f90 (107): Transport endpoint is not connected
00:18:12.544 [2024-07-15 10:31:07.025079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae2f90 (9): Bad file descriptor
00:18:12.544 [2024-07-15 10:31:07.026078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:12.544 [2024-07-15 10:31:07.026100] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:18:12.544 [2024-07-15 10:31:07.026118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
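The failure above is the expected one: the initiator was handed key_2 (/tmp/tmp.GCKoofI4BP) while the target only registered key_path for host1, so the TLS handshake is rejected and the qpair dies with errno 107 before initialization completes. Both keys came from format_interchange_psk, whose inline `python -` step can be sketched as below; the standalone function name is illustrative, and the little-endian byte order of the appended CRC32 is an assumption:

    # sketch: hex key string -> NVMe TLS PSK interchange format, e.g.
    # ("00112233445566778899aabbccddeeff", 1) -> "NVMeTLSkey-1:01:MDAx...ZmZwJEiQ:"
    import base64
    import zlib

    def format_interchange_psk(key: str, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
        raw = key.encode()                             # the ASCII hex string itself is the key material
        crc = zlib.crc32(raw).to_bytes(4, "little")    # 4-byte checksum appended (byte order assumed)
        return "{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(raw + crc).decode())

    print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))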
00:18:12.544 request:
00:18:12.544 {
00:18:12.544 "name": "TLSTEST",
00:18:12.544 "trtype": "tcp",
00:18:12.544 "traddr": "10.0.0.2",
00:18:12.544 "adrfam": "ipv4",
00:18:12.544 "trsvcid": "4420",
00:18:12.544 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:12.544 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:12.544 "prchk_reftag": false,
00:18:12.544 "prchk_guard": false,
00:18:12.544 "hdgst": false,
00:18:12.544 "ddgst": false,
00:18:12.544 "psk": "/tmp/tmp.GCKoofI4BP",
00:18:12.544 "method": "bdev_nvme_attach_controller",
00:18:12.544 "req_id": 1
00:18:12.544 }
00:18:12.544 Got JSON-RPC error response
00:18:12.544 response:
00:18:12.544 {
00:18:12.544 "code": -5,
00:18:12.544 "message": "Input/output error"
00:18:12.544 }
00:18:12.544 10:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2332820
00:18:12.544 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2332820 ']'
00:18:12.544 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2332820
00:18:12.544 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:18:12.544 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:12.544 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2332820
00:18:12.544 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:18:12.544 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:18:12.544 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2332820'
00:18:12.544 killing process with pid 2332820
00:18:12.544 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2332820
00:18:12.544 Received shutdown signal, test time was about 10.000000 seconds
00:18:12.544
00:18:12.544 Latency(us)
00:18:12.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:12.544 ===================================================================================================================
00:18:12.544 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:18:12.544 [2024-07-15 10:31:07.075554] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:18:12.544 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2332820
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TQBXgr3MjH
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TQBXgr3MjH
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TQBXgr3MjH
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TQBXgr3MjH'
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2332959
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2332959 /var/tmp/bdevperf.sock
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2332959 ']'
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:12.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:12.802 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:12.802 [2024-07-15 10:31:07.385689] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:18:12.802 [2024-07-15 10:31:07.385777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2332959 ]
00:18:12.802 EAL: No free 2048 kB hugepages reported on node 1
00:18:12.802 [2024-07-15 10:31:07.443348] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:13.066 [2024-07-15 10:31:07.548500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:18:13.066 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:13.066 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:18:13.066 10:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.TQBXgr3MjH
00:18:13.318 [2024-07-15 10:31:07.897215] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:13.318 [2024-07-15 10:31:07.897338] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:18:13.318 [2024-07-15 10:31:07.905828] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:18:13.318 [2024-07-15 10:31:07.905857] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:18:13.318 [2024-07-15 10:31:07.905917] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:18:13.318 [2024-07-15 10:31:07.906125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1402f90 (107): Transport endpoint is not connected
00:18:13.318 [2024-07-15 10:31:07.907114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1402f90 (9): Bad file descriptor
00:18:13.318 [2024-07-15 10:31:07.908114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:13.318 [2024-07-15 10:31:07.908135] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:18:13.318 [2024-07-15 10:31:07.908153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
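Here the handshake dies on the target side instead: host2 was never bound to a PSK, so the lookup for the client's identity in tcp.c/posix.c comes back empty and the endpoint is torn down, which the initiator again observes as errno 107. Making host2 succeed would only take the missing binding, sketched below with a hypothetical key file:

    # illustrative fix (not run by this test): bind host2 to a PSK on the existing subsystem
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 \
        --psk /tmp/host2_psk    # hypothetical path; interchange-format key, mode 0600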
00:18:13.318 request:
00:18:13.318 {
00:18:13.318 "name": "TLSTEST",
00:18:13.318 "trtype": "tcp",
00:18:13.318 "traddr": "10.0.0.2",
00:18:13.318 "adrfam": "ipv4",
00:18:13.318 "trsvcid": "4420",
00:18:13.318 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:13.318 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:18:13.318 "prchk_reftag": false,
00:18:13.318 "prchk_guard": false,
00:18:13.318 "hdgst": false,
00:18:13.318 "ddgst": false,
00:18:13.318 "psk": "/tmp/tmp.TQBXgr3MjH",
00:18:13.318 "method": "bdev_nvme_attach_controller",
00:18:13.318 "req_id": 1
00:18:13.318 }
00:18:13.318 Got JSON-RPC error response
00:18:13.318 response:
00:18:13.318 {
00:18:13.318 "code": -5,
00:18:13.318 "message": "Input/output error"
00:18:13.318 }
00:18:13.318 10:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2332959
00:18:13.318 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2332959 ']'
00:18:13.318 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2332959
00:18:13.318 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:18:13.318 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:13.318 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2332959
00:18:13.318 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:18:13.318 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:18:13.318 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2332959'
00:18:13.318 killing process with pid 2332959
00:18:13.318 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2332959
00:18:13.318 Received shutdown signal, test time was about 10.000000 seconds
00:18:13.318
00:18:13.318 Latency(us)
00:18:13.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:13.318 ===================================================================================================================
00:18:13.318 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:18:13.318 [2024-07-15 10:31:07.956942] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:18:13.318 10:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2332959
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TQBXgr3MjH
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TQBXgr3MjH
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TQBXgr3MjH
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TQBXgr3MjH'
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2333095
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2333095 /var/tmp/bdevperf.sock
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2333095 ']'
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:13.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:13.577 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:13.837 [2024-07-15 10:31:08.257872] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:18:13.837 [2024-07-15 10:31:08.257974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2333095 ]
00:18:13.837 EAL: No free 2048 kB hugepages reported on node 1
00:18:13.837 [2024-07-15 10:31:08.315365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:13.837 [2024-07-15 10:31:08.418558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:18:14.095 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:14.095 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:18:14.095 10:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TQBXgr3MjH
00:18:14.355 [2024-07-15 10:31:08.808381] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:14.355 [2024-07-15 10:31:08.808510] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:18:14.355 [2024-07-15 10:31:08.818244] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:18:14.355 [2024-07-15 10:31:08.818274] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:18:14.355 [2024-07-15 10:31:08.818327] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:18:14.355 [2024-07-15 10:31:08.818431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118cf90 (107): Transport endpoint is not connected
00:18:14.355 [2024-07-15 10:31:08.819422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118cf90 (9): Bad file descriptor
00:18:14.355 [2024-07-15 10:31:08.820422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:18:14.355 [2024-07-15 10:31:08.820443] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:18:14.355 [2024-07-15 10:31:08.820461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
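Same lookup failure, now from the subsystem side: nqn.2016-06.io.spdk:cnode2 has no host/PSK binding on this target, so the identity derived during the handshake resolves to nothing. As the tcp.c errors above show, that identity is a fixed-layout string built from both NQNs, sketched here:

    # the PSK identity the target tries to resolve, in the format logged by tcp_sock_get_key above
    hostnqn=nqn.2016-06.io.spdk:host1
    subnqn=nqn.2016-06.io.spdk:cnode2
    echo "NVMe0R01 ${hostnqn} ${subnqn}"    # "NVMe0R01 <hostnqn> <subnqn>"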
00:18:14.355 request:
00:18:14.355 {
00:18:14.355 "name": "TLSTEST",
00:18:14.355 "trtype": "tcp",
00:18:14.355 "traddr": "10.0.0.2",
00:18:14.355 "adrfam": "ipv4",
00:18:14.355 "trsvcid": "4420",
00:18:14.355 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:18:14.355 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:14.355 "prchk_reftag": false,
00:18:14.355 "prchk_guard": false,
00:18:14.355 "hdgst": false,
00:18:14.355 "ddgst": false,
00:18:14.355 "psk": "/tmp/tmp.TQBXgr3MjH",
00:18:14.355 "method": "bdev_nvme_attach_controller",
00:18:14.355 "req_id": 1
00:18:14.355 }
00:18:14.355 Got JSON-RPC error response
00:18:14.355 response:
00:18:14.355 {
00:18:14.355 "code": -5,
00:18:14.355 "message": "Input/output error"
00:18:14.355 }
00:18:14.355 10:31:08 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2333095
00:18:14.355 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2333095 ']'
00:18:14.355 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2333095
00:18:14.355 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:18:14.355 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:14.355 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2333095
00:18:14.355 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:18:14.355 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:18:14.355 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2333095'
00:18:14.355 killing process with pid 2333095
00:18:14.355 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2333095
00:18:14.355 Received shutdown signal, test time was about 10.000000 seconds
00:18:14.355
00:18:14.355 Latency(us)
00:18:14.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:14.355 ===================================================================================================================
00:18:14.355 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:18:14.355 [2024-07-15 10:31:08.868461] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:18:14.355 10:31:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2333095
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk=
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2333122
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2333122 /var/tmp/bdevperf.sock
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2333122 ']'
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:14.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:14.614 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:14.614 [2024-07-15 10:31:09.173719] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:18:14.614 [2024-07-15 10:31:09.173811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2333122 ]
00:18:14.614 EAL: No free 2048 kB hugepages reported on node 1
00:18:14.614 [2024-07-15 10:31:09.233302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:14.872 [2024-07-15 10:31:09.342308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:18:14.872 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:14.872 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:18:14.872 10:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:18:15.130 [2024-07-15 10:31:09.740742] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:18:15.130 [2024-07-15 10:31:09.742968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc4770 (9): Bad file descriptor
00:18:15.130 [2024-07-15 10:31:09.743963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:15.130 [2024-07-15 10:31:09.743991] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:18:15.130 [2024-07-15 10:31:09.744010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
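All of these negative cases hinge on run_bdevperf returning 1 (target/tls.sh@37) and the NOT wrapper from autotest_common.sh inverting that exit status so the suite as a whole keeps passing. A minimal stand-in for the pattern, not the helper's actual implementation:

    # sketch: succeed only when the wrapped command fails
    NOT() {
        if "$@"; then
            return 1    # unexpected success -> the negative test itself fails
        fi
        return 0        # failure was expected
    }
    NOT false && echo "negative test passed"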
00:18:15.130 request:
00:18:15.130 {
00:18:15.130 "name": "TLSTEST",
00:18:15.130 "trtype": "tcp",
00:18:15.130 "traddr": "10.0.0.2",
00:18:15.130 "adrfam": "ipv4",
00:18:15.130 "trsvcid": "4420",
00:18:15.130 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:15.130 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:15.130 "prchk_reftag": false,
00:18:15.130 "prchk_guard": false,
00:18:15.130 "hdgst": false,
00:18:15.130 "ddgst": false,
00:18:15.130 "method": "bdev_nvme_attach_controller",
00:18:15.130 "req_id": 1
00:18:15.130 }
00:18:15.130 Got JSON-RPC error response
00:18:15.130 response:
00:18:15.130 {
00:18:15.130 "code": -5,
00:18:15.130 "message": "Input/output error"
00:18:15.130 }
00:18:15.130 10:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2333122
00:18:15.130 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2333122 ']'
00:18:15.130 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2333122
00:18:15.130 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:18:15.130 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:15.130 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2333122
00:18:15.390 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:18:15.390 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:18:15.390 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2333122'
00:18:15.390 killing process with pid 2333122
00:18:15.390 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2333122
00:18:15.390 Received shutdown signal, test time was about 10.000000 seconds
00:18:15.390
00:18:15.390 Latency(us)
00:18:15.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:15.390 ===================================================================================================================
00:18:15.390 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:18:15.390 10:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2333122
00:18:15.651 10:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:18:15.651 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:18:15.651 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:18:15.651 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:18:15.651 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:18:15.651 10:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2329607
00:18:15.651 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2329607 ']'
00:18:15.651 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2329607
00:18:15.651 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:18:15.651 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:15.651 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2329607
00:18:15.651 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:18:15.651 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:18:15.651 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2329607'
00:18:15.651 killing process with pid 2329607
00:18:15.651 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2329607
00:18:15.651 [2024-07-15 10:31:10.085381] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:18:15.651 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2329607
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python -
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.AZAv4DftTx
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.AZAv4DftTx
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2333344
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2333344
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2333344 ']'
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:15.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:15.909 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:15.910 [2024-07-15 10:31:10.465628] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:18:15.910 [2024-07-15 10:31:10.465722] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:15.910 EAL: No free 2048 kB hugepages reported on node 1
00:18:15.910 [2024-07-15 10:31:10.538711] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:16.167 [2024-07-15 10:31:10.656874] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:16.167 [2024-07-15 10:31:10.656953] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:16.167 [2024-07-15 10:31:10.656968] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:16.167 [2024-07-15 10:31:10.656980] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:16.167 [2024-07-15 10:31:10.656990] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:16.167 [2024-07-15 10:31:10.657017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:18:16.167 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:16.167 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:18:16.167 10:31:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:18:16.167 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable
00:18:16.167 10:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:16.167 10:31:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:16.167 10:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.AZAv4DftTx
00:18:16.167 10:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AZAv4DftTx
00:18:16.167 10:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:18:16.425 [2024-07-15 10:31:11.016337] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:16.425 10:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:18:16.683 10:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:18:16.940 [2024-07-15 10:31:11.553824] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:18:16.940 [2024-07-15 10:31:11.554085] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:16.940 10:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:18:17.506 malloc0
00:18:17.506 10:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:17.506 10:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AZAv4DftTx
00:18:17.765 [2024-07-15 10:31:12.368000] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:18:17.765 10:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AZAv4DftTx
00:18:17.765 10:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:17.765 10:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:17.765 10:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:17.765 10:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.AZAv4DftTx'
00:18:17.765 10:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:17.765 10:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2333556
00:18:17.765 10:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:17.765 10:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:17.765 10:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2333556 /var/tmp/bdevperf.sock
00:18:17.765 10:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2333556 ']'
00:18:17.765 10:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:17.765 10:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:17.765 10:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:17.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:17.765 10:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:17.765 10:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:18.023 [2024-07-15 10:31:12.430949] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:18:18.023 [2024-07-15 10:31:12.431026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2333556 ] 00:18:18.023 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.023 [2024-07-15 10:31:12.491289] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.023 [2024-07-15 10:31:12.600141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.281 10:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.281 10:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:18.281 10:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AZAv4DftTx 00:18:18.541 [2024-07-15 10:31:12.932507] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.541 [2024-07-15 10:31:12.932646] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:18.541 TLSTESTn1 00:18:18.541 10:31:13 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:18.541 Running I/O for 10 seconds... 00:18:30.767 00:18:30.767 Latency(us) 00:18:30.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.767 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:30.767 Verification LBA range: start 0x0 length 0x2000 00:18:30.767 TLSTESTn1 : 10.04 3159.45 12.34 0.00 0.00 40417.95 6407.96 67186.54 00:18:30.767 =================================================================================================================== 00:18:30.767 Total : 3159.45 12.34 0.00 0.00 40417.95 6407.96 67186.54 00:18:30.767 0 00:18:30.767 10:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:30.767 10:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2333556 00:18:30.767 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2333556 ']' 00:18:30.767 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2333556 00:18:30.767 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:30.767 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.767 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2333556 00:18:30.767 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:30.767 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:30.767 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2333556' 00:18:30.767 killing process with pid 2333556 00:18:30.767 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2333556 00:18:30.767 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.767 00:18:30.767 Latency(us) 00:18:30.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:18:30.767 =================================================================================================================== 00:18:30.767 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.767 [2024-07-15 10:31:23.242873] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2333556 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.AZAv4DftTx 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AZAv4DftTx 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AZAv4DftTx 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AZAv4DftTx 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.AZAv4DftTx' 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2334868 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2334868 /var/tmp/bdevperf.sock 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2334868 ']' 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.768 [2024-07-15 10:31:23.564039] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:30.768 [2024-07-15 10:31:23.564128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2334868 ] 00:18:30.768 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.768 [2024-07-15 10:31:23.621990] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.768 [2024-07-15 10:31:23.724908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:30.768 10:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AZAv4DftTx 00:18:30.768 [2024-07-15 10:31:24.065303] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:30.768 [2024-07-15 10:31:24.065378] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:30.768 [2024-07-15 10:31:24.065392] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.AZAv4DftTx 00:18:30.768 request: 00:18:30.768 { 00:18:30.768 "name": "TLSTEST", 00:18:30.768 "trtype": "tcp", 00:18:30.768 "traddr": "10.0.0.2", 00:18:30.768 "adrfam": "ipv4", 00:18:30.768 "trsvcid": "4420", 00:18:30.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.768 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.768 "prchk_reftag": false, 00:18:30.768 "prchk_guard": false, 00:18:30.768 "hdgst": false, 00:18:30.768 "ddgst": false, 00:18:30.768 "psk": "/tmp/tmp.AZAv4DftTx", 00:18:30.768 "method": "bdev_nvme_attach_controller", 00:18:30.768 "req_id": 1 00:18:30.768 } 00:18:30.768 Got JSON-RPC error response 00:18:30.768 response: 00:18:30.768 { 00:18:30.768 "code": -1, 00:18:30.768 "message": "Operation not permitted" 00:18:30.768 } 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2334868 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2334868 ']' 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2334868 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2334868 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2334868' 00:18:30.768 killing process with pid 2334868 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2334868 00:18:30.768 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.768 00:18:30.768 Latency(us) 00:18:30.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.768 
=================================================================================================================== 00:18:30.768 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2334868 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2333344 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2333344 ']' 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2333344 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2333344 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2333344' 00:18:30.768 killing process with pid 2333344 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2333344 00:18:30.768 [2024-07-15 10:31:24.376399] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2333344 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2335013 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2335013 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2335013 ']' 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
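For reference, the failed bdev_nvme_attach_controller above is the test's negative case for PSK file permissions: with the key world-readable, the initiator side refuses to load it. A minimal reproduction sketch, assuming an SPDK checkout, a bdevperf instance listening on /var/tmp/bdevperf.sock, and the PSK in /tmp/tmp.AZAv4DftTx (the temp file name is unique to this run):

chmod 0666 /tmp/tmp.AZAv4DftTx      # key readable by group/other
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.AZAv4DftTx
# Expected: bdev_nvme logs "Incorrect permissions for PSK file" and the RPC
# fails with code -1 "Operation not permitted", exactly as captured above.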
00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:30.768 10:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.768 [2024-07-15 10:31:24.730064] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:30.768 [2024-07-15 10:31:24.730142] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.768 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.768 [2024-07-15 10:31:24.795771] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.768 [2024-07-15 10:31:24.916237] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.768 [2024-07-15 10:31:24.916306] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.768 [2024-07-15 10:31:24.916322] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.768 [2024-07-15 10:31:24.916335] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.769 [2024-07-15 10:31:24.916346] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:30.769 [2024-07-15 10:31:24.916386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.769 10:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.769 10:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:30.769 10:31:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:30.769 10:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:30.769 10:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.769 10:31:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.769 10:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.AZAv4DftTx 00:18:30.769 10:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:30.769 10:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.AZAv4DftTx 00:18:30.769 10:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:30.769 10:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.769 10:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:30.769 10:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.769 10:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.AZAv4DftTx 00:18:30.769 10:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AZAv4DftTx 00:18:30.769 10:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:30.769 [2024-07-15 10:31:25.287856] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.769 10:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:31.026 
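The RPCs traced here come from the setup_nvmf_tgt helper in target/tls.sh; the transport and subsystem steps are above, and the listener, namespace, and host steps follow below. Collected in one place for readability (all commands appear verbatim in this log; paths are relative to the SPDK checkout):

./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k       # -k requests a TLS-secured listener
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AZAv4DftTx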
10:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:31.283 [2024-07-15 10:31:25.825289] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:31.283 [2024-07-15 10:31:25.825529] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.283 10:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:31.541 malloc0 00:18:31.541 10:31:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:31.800 10:31:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AZAv4DftTx 00:18:32.366 [2024-07-15 10:31:26.719442] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:32.366 [2024-07-15 10:31:26.719487] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:32.366 [2024-07-15 10:31:26.719525] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:32.367 request: 00:18:32.367 { 00:18:32.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.367 "host": "nqn.2016-06.io.spdk:host1", 00:18:32.367 "psk": "/tmp/tmp.AZAv4DftTx", 00:18:32.367 "method": "nvmf_subsystem_add_host", 00:18:32.367 "req_id": 1 00:18:32.367 } 00:18:32.367 Got JSON-RPC error response 00:18:32.367 response: 00:18:32.367 { 00:18:32.367 "code": -32603, 00:18:32.367 "message": "Internal error" 00:18:32.367 } 00:18:32.367 10:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:32.367 10:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:32.367 10:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:32.367 10:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:32.367 10:31:26 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2335013 00:18:32.367 10:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2335013 ']' 00:18:32.367 10:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2335013 00:18:32.367 10:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:32.367 10:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.367 10:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2335013 00:18:32.367 10:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:32.367 10:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:32.367 10:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2335013' 00:18:32.367 killing process with pid 2335013 00:18:32.367 10:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2335013 00:18:32.367 10:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2335013 00:18:32.625 10:31:27 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.AZAv4DftTx 00:18:32.625 10:31:27 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:32.625 
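The nvmf_subsystem_add_host failure above is the target-side counterpart of the earlier attach failure: the PSK file is still mode 0666, so tcp.c refuses to read it and the error surfaces through the subsystem layer as -32603 "Internal error". The test then tightens the permissions (the chmod 0600 traced above) and restarts the target, after which the same sequence is expected to pass, as it does further below. Sketch of the fix:

chmod 0600 /tmp/tmp.AZAv4DftTx      # owner read/write only, which the PSK loader accepts
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AZAv4DftTx
# With 0600 this succeeds; only the 'PSK path' deprecation warning is logged.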
10:31:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:32.625 10:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:32.625 10:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.625 10:31:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2335316 00:18:32.625 10:31:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:32.625 10:31:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2335316 00:18:32.625 10:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2335316 ']' 00:18:32.625 10:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.625 10:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.625 10:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.625 10:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.625 10:31:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.625 [2024-07-15 10:31:27.133762] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:32.625 [2024-07-15 10:31:27.133855] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.625 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.625 [2024-07-15 10:31:27.200435] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.885 [2024-07-15 10:31:27.318733] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.885 [2024-07-15 10:31:27.318788] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.885 [2024-07-15 10:31:27.318811] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.885 [2024-07-15 10:31:27.318824] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.885 [2024-07-15 10:31:27.318836] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:32.885 [2024-07-15 10:31:27.318895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.821 10:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.821 10:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:33.821 10:31:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:33.821 10:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:33.821 10:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.821 10:31:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.821 10:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.AZAv4DftTx 00:18:33.821 10:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AZAv4DftTx 00:18:33.821 10:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:33.821 [2024-07-15 10:31:28.409759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.821 10:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:34.078 10:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:34.336 [2024-07-15 10:31:28.911117] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:34.336 [2024-07-15 10:31:28.911384] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.336 10:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:34.593 malloc0 00:18:34.593 10:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:34.850 10:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AZAv4DftTx 00:18:35.107 [2024-07-15 10:31:29.632208] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:35.107 10:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2335720 00:18:35.107 10:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:35.107 10:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:35.107 10:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2335720 /var/tmp/bdevperf.sock 00:18:35.107 10:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2335720 ']' 00:18:35.107 10:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.107 10:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:35.107 10:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.108 10:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:35.108 10:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.108 [2024-07-15 10:31:29.694687] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:35.108 [2024-07-15 10:31:29.694777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2335720 ] 00:18:35.108 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.108 [2024-07-15 10:31:29.754449] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.364 [2024-07-15 10:31:29.862496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.364 10:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:35.364 10:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:35.364 10:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AZAv4DftTx 00:18:35.622 [2024-07-15 10:31:30.204055] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.622 [2024-07-15 10:31:30.204213] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:35.881 TLSTESTn1 00:18:35.881 10:31:30 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:36.140 10:31:30 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:36.140 "subsystems": [ 00:18:36.140 { 00:18:36.140 "subsystem": "keyring", 00:18:36.140 "config": [] 00:18:36.140 }, 00:18:36.140 { 00:18:36.140 "subsystem": "iobuf", 00:18:36.140 "config": [ 00:18:36.140 { 00:18:36.140 "method": "iobuf_set_options", 00:18:36.140 "params": { 00:18:36.140 "small_pool_count": 8192, 00:18:36.140 "large_pool_count": 1024, 00:18:36.140 "small_bufsize": 8192, 00:18:36.140 "large_bufsize": 135168 00:18:36.140 } 00:18:36.140 } 00:18:36.140 ] 00:18:36.140 }, 00:18:36.140 { 00:18:36.140 "subsystem": "sock", 00:18:36.140 "config": [ 00:18:36.140 { 00:18:36.140 "method": "sock_set_default_impl", 00:18:36.140 "params": { 00:18:36.140 "impl_name": "posix" 00:18:36.140 } 00:18:36.140 }, 00:18:36.140 { 00:18:36.140 "method": "sock_impl_set_options", 00:18:36.140 "params": { 00:18:36.140 "impl_name": "ssl", 00:18:36.140 "recv_buf_size": 4096, 00:18:36.140 "send_buf_size": 4096, 00:18:36.140 "enable_recv_pipe": true, 00:18:36.140 "enable_quickack": false, 00:18:36.140 "enable_placement_id": 0, 00:18:36.140 "enable_zerocopy_send_server": true, 00:18:36.140 "enable_zerocopy_send_client": false, 00:18:36.140 "zerocopy_threshold": 0, 00:18:36.140 "tls_version": 0, 00:18:36.140 "enable_ktls": false 00:18:36.140 } 00:18:36.140 }, 00:18:36.140 { 00:18:36.140 "method": "sock_impl_set_options", 00:18:36.140 "params": { 00:18:36.140 "impl_name": "posix", 00:18:36.140 "recv_buf_size": 2097152, 00:18:36.140 
"send_buf_size": 2097152, 00:18:36.140 "enable_recv_pipe": true, 00:18:36.140 "enable_quickack": false, 00:18:36.140 "enable_placement_id": 0, 00:18:36.140 "enable_zerocopy_send_server": true, 00:18:36.140 "enable_zerocopy_send_client": false, 00:18:36.140 "zerocopy_threshold": 0, 00:18:36.140 "tls_version": 0, 00:18:36.140 "enable_ktls": false 00:18:36.140 } 00:18:36.140 } 00:18:36.140 ] 00:18:36.140 }, 00:18:36.140 { 00:18:36.140 "subsystem": "vmd", 00:18:36.140 "config": [] 00:18:36.140 }, 00:18:36.140 { 00:18:36.140 "subsystem": "accel", 00:18:36.140 "config": [ 00:18:36.140 { 00:18:36.140 "method": "accel_set_options", 00:18:36.140 "params": { 00:18:36.140 "small_cache_size": 128, 00:18:36.140 "large_cache_size": 16, 00:18:36.140 "task_count": 2048, 00:18:36.140 "sequence_count": 2048, 00:18:36.140 "buf_count": 2048 00:18:36.140 } 00:18:36.140 } 00:18:36.140 ] 00:18:36.140 }, 00:18:36.140 { 00:18:36.140 "subsystem": "bdev", 00:18:36.140 "config": [ 00:18:36.140 { 00:18:36.140 "method": "bdev_set_options", 00:18:36.140 "params": { 00:18:36.140 "bdev_io_pool_size": 65535, 00:18:36.140 "bdev_io_cache_size": 256, 00:18:36.140 "bdev_auto_examine": true, 00:18:36.140 "iobuf_small_cache_size": 128, 00:18:36.140 "iobuf_large_cache_size": 16 00:18:36.140 } 00:18:36.140 }, 00:18:36.140 { 00:18:36.140 "method": "bdev_raid_set_options", 00:18:36.140 "params": { 00:18:36.140 "process_window_size_kb": 1024 00:18:36.140 } 00:18:36.140 }, 00:18:36.140 { 00:18:36.140 "method": "bdev_iscsi_set_options", 00:18:36.140 "params": { 00:18:36.140 "timeout_sec": 30 00:18:36.140 } 00:18:36.140 }, 00:18:36.140 { 00:18:36.140 "method": "bdev_nvme_set_options", 00:18:36.140 "params": { 00:18:36.140 "action_on_timeout": "none", 00:18:36.140 "timeout_us": 0, 00:18:36.140 "timeout_admin_us": 0, 00:18:36.140 "keep_alive_timeout_ms": 10000, 00:18:36.140 "arbitration_burst": 0, 00:18:36.140 "low_priority_weight": 0, 00:18:36.140 "medium_priority_weight": 0, 00:18:36.140 "high_priority_weight": 0, 00:18:36.140 "nvme_adminq_poll_period_us": 10000, 00:18:36.140 "nvme_ioq_poll_period_us": 0, 00:18:36.140 "io_queue_requests": 0, 00:18:36.140 "delay_cmd_submit": true, 00:18:36.140 "transport_retry_count": 4, 00:18:36.140 "bdev_retry_count": 3, 00:18:36.140 "transport_ack_timeout": 0, 00:18:36.140 "ctrlr_loss_timeout_sec": 0, 00:18:36.140 "reconnect_delay_sec": 0, 00:18:36.140 "fast_io_fail_timeout_sec": 0, 00:18:36.140 "disable_auto_failback": false, 00:18:36.140 "generate_uuids": false, 00:18:36.140 "transport_tos": 0, 00:18:36.140 "nvme_error_stat": false, 00:18:36.140 "rdma_srq_size": 0, 00:18:36.140 "io_path_stat": false, 00:18:36.140 "allow_accel_sequence": false, 00:18:36.140 "rdma_max_cq_size": 0, 00:18:36.140 "rdma_cm_event_timeout_ms": 0, 00:18:36.140 "dhchap_digests": [ 00:18:36.140 "sha256", 00:18:36.140 "sha384", 00:18:36.140 "sha512" 00:18:36.140 ], 00:18:36.140 "dhchap_dhgroups": [ 00:18:36.140 "null", 00:18:36.140 "ffdhe2048", 00:18:36.140 "ffdhe3072", 00:18:36.140 "ffdhe4096", 00:18:36.140 "ffdhe6144", 00:18:36.140 "ffdhe8192" 00:18:36.140 ] 00:18:36.140 } 00:18:36.140 }, 00:18:36.140 { 00:18:36.140 "method": "bdev_nvme_set_hotplug", 00:18:36.140 "params": { 00:18:36.140 "period_us": 100000, 00:18:36.140 "enable": false 00:18:36.140 } 00:18:36.140 }, 00:18:36.140 { 00:18:36.140 "method": "bdev_malloc_create", 00:18:36.140 "params": { 00:18:36.140 "name": "malloc0", 00:18:36.140 "num_blocks": 8192, 00:18:36.140 "block_size": 4096, 00:18:36.140 "physical_block_size": 4096, 00:18:36.140 "uuid": 
"749f1b28-6236-4475-b019-e54b12091de4", 00:18:36.140 "optimal_io_boundary": 0 00:18:36.140 } 00:18:36.140 }, 00:18:36.140 { 00:18:36.140 "method": "bdev_wait_for_examine" 00:18:36.140 } 00:18:36.140 ] 00:18:36.141 }, 00:18:36.141 { 00:18:36.141 "subsystem": "nbd", 00:18:36.141 "config": [] 00:18:36.141 }, 00:18:36.141 { 00:18:36.141 "subsystem": "scheduler", 00:18:36.141 "config": [ 00:18:36.141 { 00:18:36.141 "method": "framework_set_scheduler", 00:18:36.141 "params": { 00:18:36.141 "name": "static" 00:18:36.141 } 00:18:36.141 } 00:18:36.141 ] 00:18:36.141 }, 00:18:36.141 { 00:18:36.141 "subsystem": "nvmf", 00:18:36.141 "config": [ 00:18:36.141 { 00:18:36.141 "method": "nvmf_set_config", 00:18:36.141 "params": { 00:18:36.141 "discovery_filter": "match_any", 00:18:36.141 "admin_cmd_passthru": { 00:18:36.141 "identify_ctrlr": false 00:18:36.141 } 00:18:36.141 } 00:18:36.141 }, 00:18:36.141 { 00:18:36.141 "method": "nvmf_set_max_subsystems", 00:18:36.141 "params": { 00:18:36.141 "max_subsystems": 1024 00:18:36.141 } 00:18:36.141 }, 00:18:36.141 { 00:18:36.141 "method": "nvmf_set_crdt", 00:18:36.141 "params": { 00:18:36.141 "crdt1": 0, 00:18:36.141 "crdt2": 0, 00:18:36.141 "crdt3": 0 00:18:36.141 } 00:18:36.141 }, 00:18:36.141 { 00:18:36.141 "method": "nvmf_create_transport", 00:18:36.141 "params": { 00:18:36.141 "trtype": "TCP", 00:18:36.141 "max_queue_depth": 128, 00:18:36.141 "max_io_qpairs_per_ctrlr": 127, 00:18:36.141 "in_capsule_data_size": 4096, 00:18:36.141 "max_io_size": 131072, 00:18:36.141 "io_unit_size": 131072, 00:18:36.141 "max_aq_depth": 128, 00:18:36.141 "num_shared_buffers": 511, 00:18:36.141 "buf_cache_size": 4294967295, 00:18:36.141 "dif_insert_or_strip": false, 00:18:36.141 "zcopy": false, 00:18:36.141 "c2h_success": false, 00:18:36.141 "sock_priority": 0, 00:18:36.141 "abort_timeout_sec": 1, 00:18:36.141 "ack_timeout": 0, 00:18:36.141 "data_wr_pool_size": 0 00:18:36.141 } 00:18:36.141 }, 00:18:36.141 { 00:18:36.141 "method": "nvmf_create_subsystem", 00:18:36.141 "params": { 00:18:36.141 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.141 "allow_any_host": false, 00:18:36.141 "serial_number": "SPDK00000000000001", 00:18:36.141 "model_number": "SPDK bdev Controller", 00:18:36.141 "max_namespaces": 10, 00:18:36.141 "min_cntlid": 1, 00:18:36.141 "max_cntlid": 65519, 00:18:36.141 "ana_reporting": false 00:18:36.141 } 00:18:36.141 }, 00:18:36.141 { 00:18:36.141 "method": "nvmf_subsystem_add_host", 00:18:36.141 "params": { 00:18:36.141 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.141 "host": "nqn.2016-06.io.spdk:host1", 00:18:36.141 "psk": "/tmp/tmp.AZAv4DftTx" 00:18:36.141 } 00:18:36.141 }, 00:18:36.141 { 00:18:36.141 "method": "nvmf_subsystem_add_ns", 00:18:36.141 "params": { 00:18:36.141 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.141 "namespace": { 00:18:36.141 "nsid": 1, 00:18:36.141 "bdev_name": "malloc0", 00:18:36.141 "nguid": "749F1B2862364475B019E54B12091DE4", 00:18:36.141 "uuid": "749f1b28-6236-4475-b019-e54b12091de4", 00:18:36.141 "no_auto_visible": false 00:18:36.141 } 00:18:36.141 } 00:18:36.141 }, 00:18:36.141 { 00:18:36.141 "method": "nvmf_subsystem_add_listener", 00:18:36.141 "params": { 00:18:36.141 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.141 "listen_address": { 00:18:36.141 "trtype": "TCP", 00:18:36.141 "adrfam": "IPv4", 00:18:36.141 "traddr": "10.0.0.2", 00:18:36.141 "trsvcid": "4420" 00:18:36.141 }, 00:18:36.141 "secure_channel": true 00:18:36.141 } 00:18:36.141 } 00:18:36.141 ] 00:18:36.141 } 00:18:36.141 ] 00:18:36.141 }' 00:18:36.141 10:31:30 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:36.401 10:31:30 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:36.401 "subsystems": [ 00:18:36.401 { 00:18:36.401 "subsystem": "keyring", 00:18:36.401 "config": [] 00:18:36.401 }, 00:18:36.401 { 00:18:36.401 "subsystem": "iobuf", 00:18:36.401 "config": [ 00:18:36.401 { 00:18:36.401 "method": "iobuf_set_options", 00:18:36.401 "params": { 00:18:36.401 "small_pool_count": 8192, 00:18:36.401 "large_pool_count": 1024, 00:18:36.401 "small_bufsize": 8192, 00:18:36.401 "large_bufsize": 135168 00:18:36.401 } 00:18:36.401 } 00:18:36.401 ] 00:18:36.401 }, 00:18:36.401 { 00:18:36.401 "subsystem": "sock", 00:18:36.401 "config": [ 00:18:36.401 { 00:18:36.401 "method": "sock_set_default_impl", 00:18:36.401 "params": { 00:18:36.401 "impl_name": "posix" 00:18:36.401 } 00:18:36.401 }, 00:18:36.401 { 00:18:36.401 "method": "sock_impl_set_options", 00:18:36.401 "params": { 00:18:36.401 "impl_name": "ssl", 00:18:36.401 "recv_buf_size": 4096, 00:18:36.401 "send_buf_size": 4096, 00:18:36.401 "enable_recv_pipe": true, 00:18:36.401 "enable_quickack": false, 00:18:36.401 "enable_placement_id": 0, 00:18:36.401 "enable_zerocopy_send_server": true, 00:18:36.401 "enable_zerocopy_send_client": false, 00:18:36.401 "zerocopy_threshold": 0, 00:18:36.401 "tls_version": 0, 00:18:36.401 "enable_ktls": false 00:18:36.401 } 00:18:36.401 }, 00:18:36.401 { 00:18:36.401 "method": "sock_impl_set_options", 00:18:36.401 "params": { 00:18:36.401 "impl_name": "posix", 00:18:36.401 "recv_buf_size": 2097152, 00:18:36.401 "send_buf_size": 2097152, 00:18:36.401 "enable_recv_pipe": true, 00:18:36.401 "enable_quickack": false, 00:18:36.401 "enable_placement_id": 0, 00:18:36.401 "enable_zerocopy_send_server": true, 00:18:36.401 "enable_zerocopy_send_client": false, 00:18:36.401 "zerocopy_threshold": 0, 00:18:36.401 "tls_version": 0, 00:18:36.401 "enable_ktls": false 00:18:36.401 } 00:18:36.401 } 00:18:36.401 ] 00:18:36.401 }, 00:18:36.401 { 00:18:36.401 "subsystem": "vmd", 00:18:36.401 "config": [] 00:18:36.401 }, 00:18:36.401 { 00:18:36.401 "subsystem": "accel", 00:18:36.401 "config": [ 00:18:36.401 { 00:18:36.401 "method": "accel_set_options", 00:18:36.401 "params": { 00:18:36.401 "small_cache_size": 128, 00:18:36.401 "large_cache_size": 16, 00:18:36.401 "task_count": 2048, 00:18:36.401 "sequence_count": 2048, 00:18:36.401 "buf_count": 2048 00:18:36.401 } 00:18:36.401 } 00:18:36.401 ] 00:18:36.401 }, 00:18:36.401 { 00:18:36.401 "subsystem": "bdev", 00:18:36.401 "config": [ 00:18:36.401 { 00:18:36.401 "method": "bdev_set_options", 00:18:36.401 "params": { 00:18:36.401 "bdev_io_pool_size": 65535, 00:18:36.401 "bdev_io_cache_size": 256, 00:18:36.401 "bdev_auto_examine": true, 00:18:36.401 "iobuf_small_cache_size": 128, 00:18:36.401 "iobuf_large_cache_size": 16 00:18:36.401 } 00:18:36.401 }, 00:18:36.401 { 00:18:36.401 "method": "bdev_raid_set_options", 00:18:36.401 "params": { 00:18:36.401 "process_window_size_kb": 1024 00:18:36.401 } 00:18:36.401 }, 00:18:36.401 { 00:18:36.401 "method": "bdev_iscsi_set_options", 00:18:36.401 "params": { 00:18:36.401 "timeout_sec": 30 00:18:36.401 } 00:18:36.401 }, 00:18:36.401 { 00:18:36.401 "method": "bdev_nvme_set_options", 00:18:36.401 "params": { 00:18:36.401 "action_on_timeout": "none", 00:18:36.401 "timeout_us": 0, 00:18:36.401 "timeout_admin_us": 0, 00:18:36.401 "keep_alive_timeout_ms": 10000, 00:18:36.401 "arbitration_burst": 0, 
00:18:36.401 "low_priority_weight": 0, 00:18:36.401 "medium_priority_weight": 0, 00:18:36.401 "high_priority_weight": 0, 00:18:36.401 "nvme_adminq_poll_period_us": 10000, 00:18:36.401 "nvme_ioq_poll_period_us": 0, 00:18:36.401 "io_queue_requests": 512, 00:18:36.401 "delay_cmd_submit": true, 00:18:36.401 "transport_retry_count": 4, 00:18:36.401 "bdev_retry_count": 3, 00:18:36.401 "transport_ack_timeout": 0, 00:18:36.401 "ctrlr_loss_timeout_sec": 0, 00:18:36.401 "reconnect_delay_sec": 0, 00:18:36.401 "fast_io_fail_timeout_sec": 0, 00:18:36.401 "disable_auto_failback": false, 00:18:36.401 "generate_uuids": false, 00:18:36.401 "transport_tos": 0, 00:18:36.401 "nvme_error_stat": false, 00:18:36.401 "rdma_srq_size": 0, 00:18:36.401 "io_path_stat": false, 00:18:36.401 "allow_accel_sequence": false, 00:18:36.401 "rdma_max_cq_size": 0, 00:18:36.401 "rdma_cm_event_timeout_ms": 0, 00:18:36.401 "dhchap_digests": [ 00:18:36.401 "sha256", 00:18:36.401 "sha384", 00:18:36.401 "sha512" 00:18:36.401 ], 00:18:36.401 "dhchap_dhgroups": [ 00:18:36.401 "null", 00:18:36.401 "ffdhe2048", 00:18:36.401 "ffdhe3072", 00:18:36.401 "ffdhe4096", 00:18:36.401 "ffdhe6144", 00:18:36.401 "ffdhe8192" 00:18:36.401 ] 00:18:36.401 } 00:18:36.401 }, 00:18:36.401 { 00:18:36.401 "method": "bdev_nvme_attach_controller", 00:18:36.401 "params": { 00:18:36.401 "name": "TLSTEST", 00:18:36.401 "trtype": "TCP", 00:18:36.401 "adrfam": "IPv4", 00:18:36.401 "traddr": "10.0.0.2", 00:18:36.401 "trsvcid": "4420", 00:18:36.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.401 "prchk_reftag": false, 00:18:36.401 "prchk_guard": false, 00:18:36.401 "ctrlr_loss_timeout_sec": 0, 00:18:36.401 "reconnect_delay_sec": 0, 00:18:36.401 "fast_io_fail_timeout_sec": 0, 00:18:36.401 "psk": "/tmp/tmp.AZAv4DftTx", 00:18:36.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.401 "hdgst": false, 00:18:36.401 "ddgst": false 00:18:36.401 } 00:18:36.401 }, 00:18:36.401 { 00:18:36.401 "method": "bdev_nvme_set_hotplug", 00:18:36.401 "params": { 00:18:36.401 "period_us": 100000, 00:18:36.401 "enable": false 00:18:36.401 } 00:18:36.401 }, 00:18:36.401 { 00:18:36.401 "method": "bdev_wait_for_examine" 00:18:36.401 } 00:18:36.401 ] 00:18:36.401 }, 00:18:36.401 { 00:18:36.401 "subsystem": "nbd", 00:18:36.401 "config": [] 00:18:36.401 } 00:18:36.401 ] 00:18:36.401 }' 00:18:36.401 10:31:30 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2335720 00:18:36.401 10:31:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2335720 ']' 00:18:36.401 10:31:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2335720 00:18:36.401 10:31:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:36.401 10:31:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:36.401 10:31:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2335720 00:18:36.401 10:31:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:36.401 10:31:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:36.401 10:31:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2335720' 00:18:36.401 killing process with pid 2335720 00:18:36.401 10:31:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2335720 00:18:36.401 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.401 00:18:36.401 Latency(us) 00:18:36.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:18:36.401 =================================================================================================================== 00:18:36.401 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:36.401 [2024-07-15 10:31:30.979407] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:36.401 10:31:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2335720 00:18:36.662 10:31:31 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2335316 00:18:36.662 10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2335316 ']' 00:18:36.662 10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2335316 00:18:36.662 10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:36.662 10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:36.662 10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2335316 00:18:36.662 10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:36.662 10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:36.662 10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2335316' 00:18:36.662 killing process with pid 2335316 00:18:36.662 10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2335316 00:18:36.662 [2024-07-15 10:31:31.281725] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:36.662 10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2335316 00:18:37.231 10:31:31 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:37.231 10:31:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:37.231 10:31:31 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:37.231 "subsystems": [ 00:18:37.231 { 00:18:37.231 "subsystem": "keyring", 00:18:37.231 "config": [] 00:18:37.231 }, 00:18:37.231 { 00:18:37.231 "subsystem": "iobuf", 00:18:37.231 "config": [ 00:18:37.231 { 00:18:37.231 "method": "iobuf_set_options", 00:18:37.231 "params": { 00:18:37.231 "small_pool_count": 8192, 00:18:37.231 "large_pool_count": 1024, 00:18:37.231 "small_bufsize": 8192, 00:18:37.231 "large_bufsize": 135168 00:18:37.231 } 00:18:37.231 } 00:18:37.231 ] 00:18:37.231 }, 00:18:37.231 { 00:18:37.232 "subsystem": "sock", 00:18:37.232 "config": [ 00:18:37.232 { 00:18:37.232 "method": "sock_set_default_impl", 00:18:37.232 "params": { 00:18:37.232 "impl_name": "posix" 00:18:37.232 } 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "method": "sock_impl_set_options", 00:18:37.232 "params": { 00:18:37.232 "impl_name": "ssl", 00:18:37.232 "recv_buf_size": 4096, 00:18:37.232 "send_buf_size": 4096, 00:18:37.232 "enable_recv_pipe": true, 00:18:37.232 "enable_quickack": false, 00:18:37.232 "enable_placement_id": 0, 00:18:37.232 "enable_zerocopy_send_server": true, 00:18:37.232 "enable_zerocopy_send_client": false, 00:18:37.232 "zerocopy_threshold": 0, 00:18:37.232 "tls_version": 0, 00:18:37.232 "enable_ktls": false 00:18:37.232 } 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "method": "sock_impl_set_options", 00:18:37.232 "params": { 00:18:37.232 "impl_name": "posix", 00:18:37.232 "recv_buf_size": 2097152, 00:18:37.232 "send_buf_size": 2097152, 00:18:37.232 "enable_recv_pipe": true, 
00:18:37.232 "enable_quickack": false, 00:18:37.232 "enable_placement_id": 0, 00:18:37.232 "enable_zerocopy_send_server": true, 00:18:37.232 "enable_zerocopy_send_client": false, 00:18:37.232 "zerocopy_threshold": 0, 00:18:37.232 "tls_version": 0, 00:18:37.232 "enable_ktls": false 00:18:37.232 } 00:18:37.232 } 00:18:37.232 ] 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "subsystem": "vmd", 00:18:37.232 "config": [] 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "subsystem": "accel", 00:18:37.232 "config": [ 00:18:37.232 { 00:18:37.232 "method": "accel_set_options", 00:18:37.232 "params": { 00:18:37.232 "small_cache_size": 128, 00:18:37.232 "large_cache_size": 16, 00:18:37.232 "task_count": 2048, 00:18:37.232 "sequence_count": 2048, 00:18:37.232 "buf_count": 2048 00:18:37.232 } 00:18:37.232 } 00:18:37.232 ] 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "subsystem": "bdev", 00:18:37.232 "config": [ 00:18:37.232 { 00:18:37.232 "method": "bdev_set_options", 00:18:37.232 "params": { 00:18:37.232 "bdev_io_pool_size": 65535, 00:18:37.232 "bdev_io_cache_size": 256, 00:18:37.232 "bdev_auto_examine": true, 00:18:37.232 "iobuf_small_cache_size": 128, 00:18:37.232 "iobuf_large_cache_size": 16 00:18:37.232 } 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "method": "bdev_raid_set_options", 00:18:37.232 "params": { 00:18:37.232 "process_window_size_kb": 1024 00:18:37.232 } 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "method": "bdev_iscsi_set_options", 00:18:37.232 "params": { 00:18:37.232 "timeout_sec": 30 00:18:37.232 } 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "method": "bdev_nvme_set_options", 00:18:37.232 "params": { 00:18:37.232 "action_on_timeout": "none", 00:18:37.232 "timeout_us": 0, 00:18:37.232 "timeout_admin_us": 0, 00:18:37.232 "keep_alive_timeout_ms": 10000, 00:18:37.232 "arbitration_burst": 0, 00:18:37.232 "low_priority_weight": 0, 00:18:37.232 "medium_priority_weight": 0, 00:18:37.232 "high_priority_weight": 0, 00:18:37.232 "nvme_adminq_poll_period_us": 10000, 00:18:37.232 "nvme_ioq_poll_period_us": 0, 00:18:37.232 "io_queue_requests": 0, 00:18:37.232 "delay_cmd_submit": true, 00:18:37.232 "transport_retry_count": 4, 00:18:37.232 "bdev_retry_count": 3, 00:18:37.232 "transport_ack_timeout": 0, 00:18:37.232 "ctrlr_loss_timeout_sec": 0, 00:18:37.232 "reconnect_delay_sec": 0, 00:18:37.232 "fast_io_fail_timeout_sec": 0, 00:18:37.232 "disable_auto_failback": false, 00:18:37.232 "generate_uuids": false, 00:18:37.232 "transport_tos": 0, 00:18:37.232 "nvme_error_stat": false, 00:18:37.232 "rdma_srq_size": 0, 00:18:37.232 "io_path_stat": false, 00:18:37.232 "allow_accel_sequence": false, 00:18:37.232 "rdma_max_cq_size": 0, 00:18:37.232 "rdma_cm_event_timeout_ms": 0, 00:18:37.232 "dhchap_digests": [ 00:18:37.232 "sha256", 00:18:37.232 "sha384", 00:18:37.232 "sha512" 00:18:37.232 ], 00:18:37.232 "dhchap_dhgroups": [ 00:18:37.232 "null", 00:18:37.232 "ffdhe2048", 00:18:37.232 "ffdhe3072", 00:18:37.232 "ffdhe4096", 00:18:37.232 "ffdhe6144", 00:18:37.232 "ffdhe8192" 00:18:37.232 ] 00:18:37.232 } 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "method": "bdev_nvme_set_hotplug", 00:18:37.232 "params": { 00:18:37.232 "period_us": 100000, 00:18:37.232 "enable": false 00:18:37.232 } 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "method": "bdev_malloc_create", 00:18:37.232 "params": { 00:18:37.232 "name": "malloc0", 00:18:37.232 "num_blocks": 8192, 00:18:37.232 "block_size": 4096, 00:18:37.232 "physical_block_size": 4096, 00:18:37.232 "uuid": "749f1b28-6236-4475-b019-e54b12091de4", 00:18:37.232 "optimal_io_boundary": 0 
00:18:37.232 } 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "method": "bdev_wait_for_examine" 00:18:37.232 } 00:18:37.232 ] 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "subsystem": "nbd", 00:18:37.232 "config": [] 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "subsystem": "scheduler", 00:18:37.232 "config": [ 00:18:37.232 { 00:18:37.232 "method": "framework_set_scheduler", 00:18:37.232 "params": { 00:18:37.232 "name": "static" 00:18:37.232 } 00:18:37.232 } 00:18:37.232 ] 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "subsystem": "nvmf", 00:18:37.232 "config": [ 00:18:37.232 { 00:18:37.232 "method": "nvmf_set_config", 00:18:37.232 "params": { 00:18:37.232 "discovery_filter": "match_any", 00:18:37.232 "admin_cmd_passthru": { 00:18:37.232 "identify_ctrlr": false 00:18:37.232 } 00:18:37.232 } 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "method": "nvmf_set_max_subsystems", 00:18:37.232 "params": { 00:18:37.232 "max_subsystems": 1024 00:18:37.232 } 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "method": "nvmf_set_crdt", 00:18:37.232 "params": { 00:18:37.232 "crdt1": 0, 00:18:37.232 "crdt2": 0, 00:18:37.232 "crdt3": 0 00:18:37.232 } 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "method": "nvmf_create_transport", 00:18:37.232 "params": { 00:18:37.232 "trtype": "TCP", 00:18:37.232 "max_queue_depth": 128, 00:18:37.232 "max_io_qpairs_per_ctrlr": 127, 00:18:37.232 "in_capsule_data_size": 4096, 00:18:37.232 "max_io_size": 131072, 00:18:37.232 "io_unit_size": 131072, 00:18:37.232 "max_aq_depth": 128, 00:18:37.232 "num_shared_buffers": 511, 00:18:37.232 "buf_cache_size": 4294967295, 00:18:37.232 "dif_insert_or_strip": false, 00:18:37.232 "zcopy": false, 00:18:37.232 "c2h_success": false, 00:18:37.232 "sock_priority": 0, 00:18:37.232 "abort_timeout_sec": 1, 00:18:37.232 "ack_timeout": 0, 00:18:37.232 "data_wr_pool_size": 0 00:18:37.232 } 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "method": "nvmf_create_subsystem", 00:18:37.232 "params": { 00:18:37.232 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.232 "allow_any_host": false, 00:18:37.232 "serial_number": "SPDK00000000000001", 00:18:37.232 "model_number": "SPDK bdev Controller", 00:18:37.232 "max_namespaces": 10, 00:18:37.232 "min_cntlid": 1, 00:18:37.232 "max_cntlid": 65519, 00:18:37.232 "ana_reporting": false 00:18:37.232 } 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "method": "nvmf_subsystem_add_host", 00:18:37.232 "params": { 00:18:37.232 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.232 "host": "nqn.2016-06.io.spdk:host1", 00:18:37.232 "psk": "/tmp/tmp.AZAv4DftTx" 00:18:37.232 } 00:18:37.232 }, 00:18:37.232 { 00:18:37.232 "method": "nvmf_subsystem_add_ns", 00:18:37.232 "params": { 00:18:37.232 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.232 "namespace": { 00:18:37.232 "nsid": 1, 00:18:37.232 "bdev_name": "malloc0", 00:18:37.232 "nguid": "749F1B2862364475B019E54B12091DE4", 00:18:37.232 "uuid": "749f1b28-6236-4475-b019-e54b12091de4", 00:18:37.233 "no_auto_visible": false 00:18:37.233 } 00:18:37.233 } 00:18:37.233 }, 00:18:37.233 { 00:18:37.233 "method": "nvmf_subsystem_add_listener", 00:18:37.233 "params": { 00:18:37.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.233 "listen_address": { 00:18:37.233 "trtype": "TCP", 00:18:37.233 "adrfam": "IPv4", 00:18:37.233 "traddr": "10.0.0.2", 00:18:37.233 "trsvcid": "4420" 00:18:37.233 }, 00:18:37.233 "secure_channel": true 00:18:37.233 } 00:18:37.233 } 00:18:37.233 ] 00:18:37.233 } 00:18:37.233 ] 00:18:37.233 }' 00:18:37.233 10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:37.233 
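The "-c /dev/fd/62" on the nvmf_tgt command line below is how this JSON reaches the target: the harness feeds the saved configuration in through bash process substitution rather than a file on disk, and bash exposes the substitution as a /dev/fd/NN path. An equivalent sketch, assuming $tgtconf holds the JSON shown above:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
    -c <(echo "$tgtconf")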
10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.233 10:31:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2335879 00:18:37.233 10:31:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:37.233 10:31:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2335879 00:18:37.233 10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2335879 ']' 00:18:37.233 10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.233 10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.233 10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.233 10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.233 10:31:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.233 [2024-07-15 10:31:31.641852] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:37.233 [2024-07-15 10:31:31.641963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.233 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.233 [2024-07-15 10:31:31.709857] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.233 [2024-07-15 10:31:31.824527] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.233 [2024-07-15 10:31:31.824588] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.233 [2024-07-15 10:31:31.824612] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.233 [2024-07-15 10:31:31.824626] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.233 [2024-07-15 10:31:31.824638] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:37.233 [2024-07-15 10:31:31.824731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.492 [2024-07-15 10:31:32.061275] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.492 [2024-07-15 10:31:32.077223] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:37.492 [2024-07-15 10:31:32.093285] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:37.492 [2024-07-15 10:31:32.108054] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.058 10:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.058 10:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:38.058 10:31:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:38.058 10:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:38.058 10:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.058 10:31:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.058 10:31:32 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2336032 00:18:38.058 10:31:32 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2336032 /var/tmp/bdevperf.sock 00:18:38.058 10:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2336032 ']' 00:18:38.058 10:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.058 10:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.058 10:31:32 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:38.058 10:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
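bdevperf is launched the same way, with its configuration (including the bdev_nvme_attach_controller entry that carries the PSK) delivered over /dev/fd/63, and with -z so it idles until driven over its RPC socket. A sketch reconstructing the launch (the captured command line shows the resulting /dev/fd/63 path); once waitforlisten sees /var/tmp/bdevperf.sock, the whole I/O phase is the single perform_tests RPC that appears further down:

./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
# once the socket is up, start the timed verify workload:
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests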
00:18:38.058 10:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.058 10:31:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.058 10:31:32 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:38.058 "subsystems": [ 00:18:38.058 { 00:18:38.058 "subsystem": "keyring", 00:18:38.058 "config": [] 00:18:38.058 }, 00:18:38.058 { 00:18:38.058 "subsystem": "iobuf", 00:18:38.058 "config": [ 00:18:38.058 { 00:18:38.058 "method": "iobuf_set_options", 00:18:38.058 "params": { 00:18:38.058 "small_pool_count": 8192, 00:18:38.058 "large_pool_count": 1024, 00:18:38.058 "small_bufsize": 8192, 00:18:38.058 "large_bufsize": 135168 00:18:38.058 } 00:18:38.058 } 00:18:38.058 ] 00:18:38.058 }, 00:18:38.058 { 00:18:38.058 "subsystem": "sock", 00:18:38.058 "config": [ 00:18:38.058 { 00:18:38.058 "method": "sock_set_default_impl", 00:18:38.058 "params": { 00:18:38.058 "impl_name": "posix" 00:18:38.058 } 00:18:38.058 }, 00:18:38.058 { 00:18:38.058 "method": "sock_impl_set_options", 00:18:38.058 "params": { 00:18:38.058 "impl_name": "ssl", 00:18:38.058 "recv_buf_size": 4096, 00:18:38.058 "send_buf_size": 4096, 00:18:38.058 "enable_recv_pipe": true, 00:18:38.058 "enable_quickack": false, 00:18:38.058 "enable_placement_id": 0, 00:18:38.058 "enable_zerocopy_send_server": true, 00:18:38.058 "enable_zerocopy_send_client": false, 00:18:38.058 "zerocopy_threshold": 0, 00:18:38.058 "tls_version": 0, 00:18:38.058 "enable_ktls": false 00:18:38.058 } 00:18:38.058 }, 00:18:38.058 { 00:18:38.058 "method": "sock_impl_set_options", 00:18:38.058 "params": { 00:18:38.058 "impl_name": "posix", 00:18:38.058 "recv_buf_size": 2097152, 00:18:38.058 "send_buf_size": 2097152, 00:18:38.058 "enable_recv_pipe": true, 00:18:38.058 "enable_quickack": false, 00:18:38.058 "enable_placement_id": 0, 00:18:38.058 "enable_zerocopy_send_server": true, 00:18:38.058 "enable_zerocopy_send_client": false, 00:18:38.058 "zerocopy_threshold": 0, 00:18:38.058 "tls_version": 0, 00:18:38.058 "enable_ktls": false 00:18:38.058 } 00:18:38.058 } 00:18:38.058 ] 00:18:38.058 }, 00:18:38.058 { 00:18:38.058 "subsystem": "vmd", 00:18:38.058 "config": [] 00:18:38.058 }, 00:18:38.058 { 00:18:38.058 "subsystem": "accel", 00:18:38.058 "config": [ 00:18:38.058 { 00:18:38.058 "method": "accel_set_options", 00:18:38.058 "params": { 00:18:38.058 "small_cache_size": 128, 00:18:38.058 "large_cache_size": 16, 00:18:38.058 "task_count": 2048, 00:18:38.058 "sequence_count": 2048, 00:18:38.058 "buf_count": 2048 00:18:38.058 } 00:18:38.058 } 00:18:38.058 ] 00:18:38.058 }, 00:18:38.058 { 00:18:38.058 "subsystem": "bdev", 00:18:38.058 "config": [ 00:18:38.058 { 00:18:38.058 "method": "bdev_set_options", 00:18:38.058 "params": { 00:18:38.058 "bdev_io_pool_size": 65535, 00:18:38.058 "bdev_io_cache_size": 256, 00:18:38.058 "bdev_auto_examine": true, 00:18:38.058 "iobuf_small_cache_size": 128, 00:18:38.058 "iobuf_large_cache_size": 16 00:18:38.058 } 00:18:38.058 }, 00:18:38.058 { 00:18:38.058 "method": "bdev_raid_set_options", 00:18:38.058 "params": { 00:18:38.058 "process_window_size_kb": 1024 00:18:38.058 } 00:18:38.058 }, 00:18:38.058 { 00:18:38.058 "method": "bdev_iscsi_set_options", 00:18:38.058 "params": { 00:18:38.058 "timeout_sec": 30 00:18:38.058 } 00:18:38.058 }, 00:18:38.058 { 00:18:38.058 "method": "bdev_nvme_set_options", 00:18:38.058 "params": { 00:18:38.058 "action_on_timeout": "none", 00:18:38.058 "timeout_us": 0, 00:18:38.058 "timeout_admin_us": 0, 00:18:38.058 "keep_alive_timeout_ms": 10000, 00:18:38.058 
"arbitration_burst": 0, 00:18:38.058 "low_priority_weight": 0, 00:18:38.058 "medium_priority_weight": 0, 00:18:38.058 "high_priority_weight": 0, 00:18:38.058 "nvme_adminq_poll_period_us": 10000, 00:18:38.058 "nvme_ioq_poll_period_us": 0, 00:18:38.058 "io_queue_requests": 512, 00:18:38.058 "delay_cmd_submit": true, 00:18:38.058 "transport_retry_count": 4, 00:18:38.058 "bdev_retry_count": 3, 00:18:38.058 "transport_ack_timeout": 0, 00:18:38.058 "ctrlr_loss_timeout_sec": 0, 00:18:38.058 "reconnect_delay_sec": 0, 00:18:38.058 "fast_io_fail_timeout_sec": 0, 00:18:38.058 "disable_auto_failback": false, 00:18:38.058 "generate_uuids": false, 00:18:38.058 "transport_tos": 0, 00:18:38.058 "nvme_error_stat": false, 00:18:38.058 "rdma_srq_size": 0, 00:18:38.058 "io_path_stat": false, 00:18:38.058 "allow_accel_sequence": false, 00:18:38.058 "rdma_max_cq_size": 0, 00:18:38.058 "rdma_cm_event_timeout_ms": 0, 00:18:38.058 "dhchap_digests": [ 00:18:38.058 "sha256", 00:18:38.058 "sha384", 00:18:38.058 "sha512" 00:18:38.058 ], 00:18:38.058 "dhchap_dhgroups": [ 00:18:38.058 "null", 00:18:38.058 "ffdhe2048", 00:18:38.058 "ffdhe3072", 00:18:38.058 "ffdhe4096", 00:18:38.058 "ffdhe6144", 00:18:38.058 "ffdhe8192" 00:18:38.058 ] 00:18:38.058 } 00:18:38.058 }, 00:18:38.058 { 00:18:38.058 "method": "bdev_nvme_attach_controller", 00:18:38.058 "params": { 00:18:38.058 "name": "TLSTEST", 00:18:38.058 "trtype": "TCP", 00:18:38.058 "adrfam": "IPv4", 00:18:38.058 "traddr": "10.0.0.2", 00:18:38.058 "trsvcid": "4420", 00:18:38.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.058 "prchk_reftag": false, 00:18:38.058 "prchk_guard": false, 00:18:38.058 "ctrlr_loss_timeout_sec": 0, 00:18:38.058 "reconnect_delay_sec": 0, 00:18:38.058 "fast_io_fail_timeout_sec": 0, 00:18:38.058 "psk": "/tmp/tmp.AZAv4DftTx", 00:18:38.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.058 "hdgst": false, 00:18:38.058 "ddgst": false 00:18:38.058 } 00:18:38.058 }, 00:18:38.058 { 00:18:38.058 "method": "bdev_nvme_set_hotplug", 00:18:38.058 "params": { 00:18:38.058 "period_us": 100000, 00:18:38.058 "enable": false 00:18:38.058 } 00:18:38.058 }, 00:18:38.058 { 00:18:38.058 "method": "bdev_wait_for_examine" 00:18:38.058 } 00:18:38.058 ] 00:18:38.058 }, 00:18:38.058 { 00:18:38.058 "subsystem": "nbd", 00:18:38.058 "config": [] 00:18:38.058 } 00:18:38.058 ] 00:18:38.058 }' 00:18:38.058 [2024-07-15 10:31:32.632682] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:38.058 [2024-07-15 10:31:32.632766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2336032 ] 00:18:38.058 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.058 [2024-07-15 10:31:32.690399] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.317 [2024-07-15 10:31:32.799222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.317 [2024-07-15 10:31:32.964584] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.317 [2024-07-15 10:31:32.964708] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:39.300 10:31:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.300 10:31:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:39.300 10:31:33 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:39.300 Running I/O for 10 seconds... 00:18:49.282 00:18:49.282 Latency(us) 00:18:49.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.282 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:49.282 Verification LBA range: start 0x0 length 0x2000 00:18:49.282 TLSTESTn1 : 10.04 2827.56 11.05 0.00 0.00 45156.78 5801.15 75342.13 00:18:49.282 =================================================================================================================== 00:18:49.282 Total : 2827.56 11.05 0.00 0.00 45156.78 5801.15 75342.13 00:18:49.282 0 00:18:49.282 10:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:49.282 10:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2336032 00:18:49.282 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2336032 ']' 00:18:49.282 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2336032 00:18:49.282 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:49.282 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:49.282 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2336032 00:18:49.282 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:49.282 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:49.282 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2336032' 00:18:49.282 killing process with pid 2336032 00:18:49.282 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2336032 00:18:49.282 Received shutdown signal, test time was about 10.000000 seconds 00:18:49.282 00:18:49.282 Latency(us) 00:18:49.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.282 =================================================================================================================== 00:18:49.282 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.282 [2024-07-15 10:31:43.868952] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:49.282 10:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2336032 00:18:49.539 10:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2335879 00:18:49.539 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2335879 ']' 00:18:49.539 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2335879 00:18:49.539 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:49.539 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:49.539 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2335879 00:18:49.539 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:49.539 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:49.539 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2335879' 00:18:49.539 killing process with pid 2335879 00:18:49.539 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2335879 00:18:49.539 [2024-07-15 10:31:44.146636] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:49.539 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2335879 00:18:49.797 10:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:49.797 10:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:49.797 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:49.797 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.797 10:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2337431 00:18:49.797 10:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:49.797 10:31:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2337431 00:18:49.797 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2337431 ']' 00:18:49.797 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.797 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.797 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.797 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.797 10:31:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.057 [2024-07-15 10:31:44.479380] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
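The nvmfappstart above (pid 2337431) runs the target inside the cvl_0_0_ns_spdk network namespace so its 10.0.0.2 listener stays off the host stack; -e 0xFFFF is what produces the "Tracepoint Group Mask 0xFFFF" notices. A sketch of that launch pattern, assuming the usual $!-style PID capture (the harness does this inside its nvmfappstart/waitforlisten helpers):

    # start the target in the test netns; -i 0 picks shm id 0, -e 0xFFFF
    # enables every tracepoint group for the nvmf_trace.0 capture
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # autotest_common.sh helper; blocks until the RPC socket answers
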
00:18:50.057 [2024-07-15 10:31:44.479452] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.057 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.057 [2024-07-15 10:31:44.546254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.057 [2024-07-15 10:31:44.663984] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.057 [2024-07-15 10:31:44.664036] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.057 [2024-07-15 10:31:44.664058] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.057 [2024-07-15 10:31:44.664069] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.057 [2024-07-15 10:31:44.664079] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:50.057 [2024-07-15 10:31:44.664103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.996 10:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.996 10:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:50.996 10:31:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:50.996 10:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:50.996 10:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.996 10:31:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.996 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.AZAv4DftTx 00:18:50.996 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AZAv4DftTx 00:18:50.996 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:51.253 [2024-07-15 10:31:45.676438] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.253 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:51.510 10:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:51.769 [2024-07-15 10:31:46.189803] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:51.769 [2024-07-15 10:31:46.190081] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.769 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:52.026 malloc0 00:18:52.026 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:52.283 10:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.AZAv4DftTx 00:18:52.541 [2024-07-15 10:31:47.052106] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:52.541 10:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2337779 00:18:52.541 10:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:52.541 10:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:52.541 10:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2337779 /var/tmp/bdevperf.sock 00:18:52.541 10:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2337779 ']' 00:18:52.541 10:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:52.541 10:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:52.541 10:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:52.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:52.541 10:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.541 10:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.541 [2024-07-15 10:31:47.113222] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:52.541 [2024-07-15 10:31:47.113307] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2337779 ] 00:18:52.541 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.541 [2024-07-15 10:31:47.174957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.799 [2024-07-15 10:31:47.291109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.737 10:31:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.737 10:31:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:53.737 10:31:48 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AZAv4DftTx 00:18:53.737 10:31:48 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:53.996 [2024-07-15 10:31:48.627545] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.253 nvme0n1 00:18:54.254 10:31:48 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:54.254 Running I/O for 1 seconds... 
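The two RPCs above (target/tls.sh@227-228) are the keyring-based attach, the replacement for the deprecated inline PSK path used by the earlier bdevperf instance. Condensed from this run's output:

    # register the PSK file as a named key inside the bdevperf app
    scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.AZAv4DftTx
    # attach over NVMe/TCP with TLS, referencing the key by name rather
    # than by path, so no spdk_nvme_ctrlr_opts.psk deprecation fires
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
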
00:18:55.635 00:18:55.635 Latency(us) 00:18:55.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.635 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:55.635 Verification LBA range: start 0x0 length 0x2000 00:18:55.635 nvme0n1 : 1.04 3177.05 12.41 0.00 0.00 39610.42 6407.96 68739.98 00:18:55.635 =================================================================================================================== 00:18:55.635 Total : 3177.05 12.41 0.00 0.00 39610.42 6407.96 68739.98 00:18:55.635 0 00:18:55.635 10:31:49 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2337779 00:18:55.635 10:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2337779 ']' 00:18:55.635 10:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2337779 00:18:55.635 10:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:55.635 10:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.635 10:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2337779 00:18:55.635 10:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:55.635 10:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:55.635 10:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2337779' 00:18:55.635 killing process with pid 2337779 00:18:55.635 10:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2337779 00:18:55.635 Received shutdown signal, test time was about 1.000000 seconds 00:18:55.635 00:18:55.635 Latency(us) 00:18:55.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.635 =================================================================================================================== 00:18:55.635 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:55.635 10:31:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2337779 00:18:55.635 10:31:50 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2337431 00:18:55.635 10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2337431 ']' 00:18:55.635 10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2337431 00:18:55.635 10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:55.635 10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.635 10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2337431 00:18:55.635 10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:55.635 10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:55.635 10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2337431' 00:18:55.635 killing process with pid 2337431 00:18:55.635 10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2337431 00:18:55.635 [2024-07-15 10:31:50.224045] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:55.635 10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2337431 00:18:55.894 10:31:50 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:18:55.894 10:31:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:55.894 
10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:55.894 10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.894 10:31:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2338186 00:18:55.894 10:31:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:55.894 10:31:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2338186 00:18:55.894 10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2338186 ']' 00:18:55.894 10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.894 10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.894 10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.894 10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.894 10:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.154 [2024-07-15 10:31:50.577300] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:56.154 [2024-07-15 10:31:50.577401] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.154 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.154 [2024-07-15 10:31:50.647014] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.154 [2024-07-15 10:31:50.759373] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.154 [2024-07-15 10:31:50.759438] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.154 [2024-07-15 10:31:50.759455] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.154 [2024-07-15 10:31:50.759469] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.154 [2024-07-15 10:31:50.759480] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
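The target side makes the same migration between the two app generations: pid 2337431 was given the PSK as a file path at target/tls.sh@58 (hence the nvmf_tcp_psk_path removal warning above), while the target just started here (pid 2338186) is configured through the keyring, as the saved config further down confirms ("psk": "key0" under nvmf_subsystem_add_host). Side by side, as a sketch; the keyring form of the add_host flag is inferred from that saved config rather than shown verbatim in this run:

    # deprecated: hand the subsystem the PSK file path directly
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AZAv4DftTx
    # keyring replacement: register the key once, then reference by name
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.AZAv4DftTx
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0
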
00:18:56.154 [2024-07-15 10:31:50.759518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.090 [2024-07-15 10:31:51.527090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.090 malloc0 00:18:57.090 [2024-07-15 10:31:51.558943] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:57.090 [2024-07-15 10:31:51.559244] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2338335 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2338335 /var/tmp/bdevperf.sock 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2338335 ']' 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:57.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:57.090 10:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.090 [2024-07-15 10:31:51.630424] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:57.090 [2024-07-15 10:31:51.630487] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2338335 ] 00:18:57.090 EAL: No free 2048 kB hugepages reported on node 1 00:18:57.090 [2024-07-15 10:31:51.693007] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.348 [2024-07-15 10:31:51.809383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.348 10:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:57.348 10:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:57.348 10:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AZAv4DftTx 00:18:57.606 10:31:52 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:57.864 [2024-07-15 10:31:52.408307] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:57.864 nvme0n1 00:18:57.864 10:31:52 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:58.122 Running I/O for 1 seconds... 00:18:59.062 00:18:59.062 Latency(us) 00:18:59.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.062 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:59.062 Verification LBA range: start 0x0 length 0x2000 00:18:59.062 nvme0n1 : 1.04 2899.92 11.33 0.00 0.00 43354.07 6407.96 62137.84 00:18:59.062 =================================================================================================================== 00:18:59.062 Total : 2899.92 11.33 0.00 0.00 43354.07 6407.96 62137.84 00:18:59.062 0 00:18:59.062 10:31:53 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:59.062 10:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.062 10:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.322 10:31:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.322 10:31:53 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:18:59.322 "subsystems": [ 00:18:59.322 { 00:18:59.322 "subsystem": "keyring", 00:18:59.322 "config": [ 00:18:59.322 { 00:18:59.322 "method": "keyring_file_add_key", 00:18:59.322 "params": { 00:18:59.322 "name": "key0", 00:18:59.322 "path": "/tmp/tmp.AZAv4DftTx" 00:18:59.322 } 00:18:59.322 } 00:18:59.322 ] 00:18:59.322 }, 00:18:59.322 { 00:18:59.322 "subsystem": "iobuf", 00:18:59.322 "config": [ 00:18:59.322 { 00:18:59.322 "method": "iobuf_set_options", 00:18:59.322 "params": { 00:18:59.322 "small_pool_count": 8192, 00:18:59.322 "large_pool_count": 1024, 00:18:59.322 "small_bufsize": 8192, 00:18:59.322 "large_bufsize": 135168 00:18:59.322 } 00:18:59.322 } 00:18:59.322 ] 00:18:59.322 }, 00:18:59.322 { 00:18:59.322 "subsystem": "sock", 00:18:59.322 "config": [ 00:18:59.322 { 00:18:59.322 "method": "sock_set_default_impl", 00:18:59.322 "params": { 00:18:59.322 "impl_name": "posix" 00:18:59.322 } 
00:18:59.322 }, 00:18:59.322 { 00:18:59.322 "method": "sock_impl_set_options", 00:18:59.322 "params": { 00:18:59.322 "impl_name": "ssl", 00:18:59.322 "recv_buf_size": 4096, 00:18:59.322 "send_buf_size": 4096, 00:18:59.322 "enable_recv_pipe": true, 00:18:59.322 "enable_quickack": false, 00:18:59.322 "enable_placement_id": 0, 00:18:59.322 "enable_zerocopy_send_server": true, 00:18:59.322 "enable_zerocopy_send_client": false, 00:18:59.322 "zerocopy_threshold": 0, 00:18:59.322 "tls_version": 0, 00:18:59.322 "enable_ktls": false 00:18:59.322 } 00:18:59.322 }, 00:18:59.322 { 00:18:59.322 "method": "sock_impl_set_options", 00:18:59.322 "params": { 00:18:59.322 "impl_name": "posix", 00:18:59.322 "recv_buf_size": 2097152, 00:18:59.322 "send_buf_size": 2097152, 00:18:59.322 "enable_recv_pipe": true, 00:18:59.322 "enable_quickack": false, 00:18:59.322 "enable_placement_id": 0, 00:18:59.322 "enable_zerocopy_send_server": true, 00:18:59.322 "enable_zerocopy_send_client": false, 00:18:59.322 "zerocopy_threshold": 0, 00:18:59.322 "tls_version": 0, 00:18:59.322 "enable_ktls": false 00:18:59.322 } 00:18:59.322 } 00:18:59.322 ] 00:18:59.322 }, 00:18:59.322 { 00:18:59.322 "subsystem": "vmd", 00:18:59.322 "config": [] 00:18:59.322 }, 00:18:59.322 { 00:18:59.322 "subsystem": "accel", 00:18:59.322 "config": [ 00:18:59.322 { 00:18:59.322 "method": "accel_set_options", 00:18:59.322 "params": { 00:18:59.322 "small_cache_size": 128, 00:18:59.322 "large_cache_size": 16, 00:18:59.322 "task_count": 2048, 00:18:59.322 "sequence_count": 2048, 00:18:59.322 "buf_count": 2048 00:18:59.322 } 00:18:59.322 } 00:18:59.322 ] 00:18:59.322 }, 00:18:59.322 { 00:18:59.322 "subsystem": "bdev", 00:18:59.322 "config": [ 00:18:59.322 { 00:18:59.322 "method": "bdev_set_options", 00:18:59.322 "params": { 00:18:59.322 "bdev_io_pool_size": 65535, 00:18:59.322 "bdev_io_cache_size": 256, 00:18:59.322 "bdev_auto_examine": true, 00:18:59.322 "iobuf_small_cache_size": 128, 00:18:59.322 "iobuf_large_cache_size": 16 00:18:59.322 } 00:18:59.322 }, 00:18:59.322 { 00:18:59.322 "method": "bdev_raid_set_options", 00:18:59.322 "params": { 00:18:59.322 "process_window_size_kb": 1024 00:18:59.322 } 00:18:59.322 }, 00:18:59.322 { 00:18:59.322 "method": "bdev_iscsi_set_options", 00:18:59.322 "params": { 00:18:59.322 "timeout_sec": 30 00:18:59.322 } 00:18:59.322 }, 00:18:59.322 { 00:18:59.322 "method": "bdev_nvme_set_options", 00:18:59.322 "params": { 00:18:59.322 "action_on_timeout": "none", 00:18:59.322 "timeout_us": 0, 00:18:59.322 "timeout_admin_us": 0, 00:18:59.322 "keep_alive_timeout_ms": 10000, 00:18:59.322 "arbitration_burst": 0, 00:18:59.322 "low_priority_weight": 0, 00:18:59.322 "medium_priority_weight": 0, 00:18:59.322 "high_priority_weight": 0, 00:18:59.322 "nvme_adminq_poll_period_us": 10000, 00:18:59.322 "nvme_ioq_poll_period_us": 0, 00:18:59.322 "io_queue_requests": 0, 00:18:59.322 "delay_cmd_submit": true, 00:18:59.322 "transport_retry_count": 4, 00:18:59.322 "bdev_retry_count": 3, 00:18:59.322 "transport_ack_timeout": 0, 00:18:59.322 "ctrlr_loss_timeout_sec": 0, 00:18:59.322 "reconnect_delay_sec": 0, 00:18:59.322 "fast_io_fail_timeout_sec": 0, 00:18:59.322 "disable_auto_failback": false, 00:18:59.322 "generate_uuids": false, 00:18:59.322 "transport_tos": 0, 00:18:59.322 "nvme_error_stat": false, 00:18:59.322 "rdma_srq_size": 0, 00:18:59.322 "io_path_stat": false, 00:18:59.322 "allow_accel_sequence": false, 00:18:59.322 "rdma_max_cq_size": 0, 00:18:59.322 "rdma_cm_event_timeout_ms": 0, 00:18:59.322 "dhchap_digests": [ 00:18:59.322 "sha256", 
00:18:59.322 "sha384", 00:18:59.322 "sha512" 00:18:59.322 ], 00:18:59.322 "dhchap_dhgroups": [ 00:18:59.322 "null", 00:18:59.322 "ffdhe2048", 00:18:59.322 "ffdhe3072", 00:18:59.322 "ffdhe4096", 00:18:59.322 "ffdhe6144", 00:18:59.322 "ffdhe8192" 00:18:59.322 ] 00:18:59.322 } 00:18:59.322 }, 00:18:59.322 { 00:18:59.322 "method": "bdev_nvme_set_hotplug", 00:18:59.322 "params": { 00:18:59.322 "period_us": 100000, 00:18:59.322 "enable": false 00:18:59.322 } 00:18:59.322 }, 00:18:59.322 { 00:18:59.322 "method": "bdev_malloc_create", 00:18:59.322 "params": { 00:18:59.322 "name": "malloc0", 00:18:59.322 "num_blocks": 8192, 00:18:59.322 "block_size": 4096, 00:18:59.322 "physical_block_size": 4096, 00:18:59.322 "uuid": "37d61ede-ce0d-492d-bdac-ef08a2763a3c", 00:18:59.322 "optimal_io_boundary": 0 00:18:59.322 } 00:18:59.322 }, 00:18:59.322 { 00:18:59.322 "method": "bdev_wait_for_examine" 00:18:59.322 } 00:18:59.322 ] 00:18:59.322 }, 00:18:59.322 { 00:18:59.322 "subsystem": "nbd", 00:18:59.322 "config": [] 00:18:59.322 }, 00:18:59.322 { 00:18:59.322 "subsystem": "scheduler", 00:18:59.322 "config": [ 00:18:59.322 { 00:18:59.322 "method": "framework_set_scheduler", 00:18:59.322 "params": { 00:18:59.322 "name": "static" 00:18:59.322 } 00:18:59.322 } 00:18:59.322 ] 00:18:59.322 }, 00:18:59.322 { 00:18:59.322 "subsystem": "nvmf", 00:18:59.322 "config": [ 00:18:59.322 { 00:18:59.322 "method": "nvmf_set_config", 00:18:59.322 "params": { 00:18:59.322 "discovery_filter": "match_any", 00:18:59.322 "admin_cmd_passthru": { 00:18:59.322 "identify_ctrlr": false 00:18:59.322 } 00:18:59.322 } 00:18:59.322 }, 00:18:59.322 { 00:18:59.323 "method": "nvmf_set_max_subsystems", 00:18:59.323 "params": { 00:18:59.323 "max_subsystems": 1024 00:18:59.323 } 00:18:59.323 }, 00:18:59.323 { 00:18:59.323 "method": "nvmf_set_crdt", 00:18:59.323 "params": { 00:18:59.323 "crdt1": 0, 00:18:59.323 "crdt2": 0, 00:18:59.323 "crdt3": 0 00:18:59.323 } 00:18:59.323 }, 00:18:59.323 { 00:18:59.323 "method": "nvmf_create_transport", 00:18:59.323 "params": { 00:18:59.323 "trtype": "TCP", 00:18:59.323 "max_queue_depth": 128, 00:18:59.323 "max_io_qpairs_per_ctrlr": 127, 00:18:59.323 "in_capsule_data_size": 4096, 00:18:59.323 "max_io_size": 131072, 00:18:59.323 "io_unit_size": 131072, 00:18:59.323 "max_aq_depth": 128, 00:18:59.323 "num_shared_buffers": 511, 00:18:59.323 "buf_cache_size": 4294967295, 00:18:59.323 "dif_insert_or_strip": false, 00:18:59.323 "zcopy": false, 00:18:59.323 "c2h_success": false, 00:18:59.323 "sock_priority": 0, 00:18:59.323 "abort_timeout_sec": 1, 00:18:59.323 "ack_timeout": 0, 00:18:59.323 "data_wr_pool_size": 0 00:18:59.323 } 00:18:59.323 }, 00:18:59.323 { 00:18:59.323 "method": "nvmf_create_subsystem", 00:18:59.323 "params": { 00:18:59.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.323 "allow_any_host": false, 00:18:59.323 "serial_number": "00000000000000000000", 00:18:59.323 "model_number": "SPDK bdev Controller", 00:18:59.323 "max_namespaces": 32, 00:18:59.323 "min_cntlid": 1, 00:18:59.323 "max_cntlid": 65519, 00:18:59.323 "ana_reporting": false 00:18:59.323 } 00:18:59.323 }, 00:18:59.323 { 00:18:59.323 "method": "nvmf_subsystem_add_host", 00:18:59.323 "params": { 00:18:59.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.323 "host": "nqn.2016-06.io.spdk:host1", 00:18:59.323 "psk": "key0" 00:18:59.323 } 00:18:59.323 }, 00:18:59.323 { 00:18:59.323 "method": "nvmf_subsystem_add_ns", 00:18:59.323 "params": { 00:18:59.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.323 "namespace": { 00:18:59.323 "nsid": 1, 
00:18:59.323 "bdev_name": "malloc0", 00:18:59.323 "nguid": "37D61EDECE0D492DBDACEF08A2763A3C", 00:18:59.323 "uuid": "37d61ede-ce0d-492d-bdac-ef08a2763a3c", 00:18:59.323 "no_auto_visible": false 00:18:59.323 } 00:18:59.323 } 00:18:59.323 }, 00:18:59.323 { 00:18:59.323 "method": "nvmf_subsystem_add_listener", 00:18:59.323 "params": { 00:18:59.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.323 "listen_address": { 00:18:59.323 "trtype": "TCP", 00:18:59.323 "adrfam": "IPv4", 00:18:59.323 "traddr": "10.0.0.2", 00:18:59.323 "trsvcid": "4420" 00:18:59.323 }, 00:18:59.323 "secure_channel": true 00:18:59.323 } 00:18:59.323 } 00:18:59.323 ] 00:18:59.323 } 00:18:59.323 ] 00:18:59.323 }' 00:18:59.323 10:31:53 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:59.584 10:31:54 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:18:59.584 "subsystems": [ 00:18:59.584 { 00:18:59.584 "subsystem": "keyring", 00:18:59.584 "config": [ 00:18:59.584 { 00:18:59.584 "method": "keyring_file_add_key", 00:18:59.584 "params": { 00:18:59.584 "name": "key0", 00:18:59.584 "path": "/tmp/tmp.AZAv4DftTx" 00:18:59.584 } 00:18:59.584 } 00:18:59.584 ] 00:18:59.584 }, 00:18:59.584 { 00:18:59.584 "subsystem": "iobuf", 00:18:59.584 "config": [ 00:18:59.584 { 00:18:59.584 "method": "iobuf_set_options", 00:18:59.584 "params": { 00:18:59.584 "small_pool_count": 8192, 00:18:59.584 "large_pool_count": 1024, 00:18:59.584 "small_bufsize": 8192, 00:18:59.584 "large_bufsize": 135168 00:18:59.584 } 00:18:59.584 } 00:18:59.584 ] 00:18:59.584 }, 00:18:59.584 { 00:18:59.584 "subsystem": "sock", 00:18:59.584 "config": [ 00:18:59.584 { 00:18:59.584 "method": "sock_set_default_impl", 00:18:59.584 "params": { 00:18:59.584 "impl_name": "posix" 00:18:59.584 } 00:18:59.584 }, 00:18:59.584 { 00:18:59.584 "method": "sock_impl_set_options", 00:18:59.584 "params": { 00:18:59.584 "impl_name": "ssl", 00:18:59.584 "recv_buf_size": 4096, 00:18:59.584 "send_buf_size": 4096, 00:18:59.584 "enable_recv_pipe": true, 00:18:59.584 "enable_quickack": false, 00:18:59.584 "enable_placement_id": 0, 00:18:59.584 "enable_zerocopy_send_server": true, 00:18:59.584 "enable_zerocopy_send_client": false, 00:18:59.584 "zerocopy_threshold": 0, 00:18:59.584 "tls_version": 0, 00:18:59.584 "enable_ktls": false 00:18:59.584 } 00:18:59.584 }, 00:18:59.584 { 00:18:59.584 "method": "sock_impl_set_options", 00:18:59.584 "params": { 00:18:59.584 "impl_name": "posix", 00:18:59.584 "recv_buf_size": 2097152, 00:18:59.584 "send_buf_size": 2097152, 00:18:59.584 "enable_recv_pipe": true, 00:18:59.584 "enable_quickack": false, 00:18:59.584 "enable_placement_id": 0, 00:18:59.584 "enable_zerocopy_send_server": true, 00:18:59.584 "enable_zerocopy_send_client": false, 00:18:59.584 "zerocopy_threshold": 0, 00:18:59.584 "tls_version": 0, 00:18:59.584 "enable_ktls": false 00:18:59.584 } 00:18:59.584 } 00:18:59.584 ] 00:18:59.584 }, 00:18:59.584 { 00:18:59.584 "subsystem": "vmd", 00:18:59.584 "config": [] 00:18:59.584 }, 00:18:59.584 { 00:18:59.584 "subsystem": "accel", 00:18:59.584 "config": [ 00:18:59.584 { 00:18:59.584 "method": "accel_set_options", 00:18:59.584 "params": { 00:18:59.584 "small_cache_size": 128, 00:18:59.584 "large_cache_size": 16, 00:18:59.584 "task_count": 2048, 00:18:59.584 "sequence_count": 2048, 00:18:59.584 "buf_count": 2048 00:18:59.584 } 00:18:59.584 } 00:18:59.584 ] 00:18:59.584 }, 00:18:59.584 { 00:18:59.584 "subsystem": "bdev", 00:18:59.584 "config": [ 
00:18:59.584 { 00:18:59.584 "method": "bdev_set_options", 00:18:59.584 "params": { 00:18:59.584 "bdev_io_pool_size": 65535, 00:18:59.584 "bdev_io_cache_size": 256, 00:18:59.584 "bdev_auto_examine": true, 00:18:59.584 "iobuf_small_cache_size": 128, 00:18:59.584 "iobuf_large_cache_size": 16 00:18:59.584 } 00:18:59.584 }, 00:18:59.584 { 00:18:59.584 "method": "bdev_raid_set_options", 00:18:59.584 "params": { 00:18:59.584 "process_window_size_kb": 1024 00:18:59.584 } 00:18:59.584 }, 00:18:59.584 { 00:18:59.584 "method": "bdev_iscsi_set_options", 00:18:59.584 "params": { 00:18:59.584 "timeout_sec": 30 00:18:59.584 } 00:18:59.584 }, 00:18:59.584 { 00:18:59.584 "method": "bdev_nvme_set_options", 00:18:59.584 "params": { 00:18:59.584 "action_on_timeout": "none", 00:18:59.584 "timeout_us": 0, 00:18:59.584 "timeout_admin_us": 0, 00:18:59.584 "keep_alive_timeout_ms": 10000, 00:18:59.584 "arbitration_burst": 0, 00:18:59.584 "low_priority_weight": 0, 00:18:59.584 "medium_priority_weight": 0, 00:18:59.584 "high_priority_weight": 0, 00:18:59.584 "nvme_adminq_poll_period_us": 10000, 00:18:59.584 "nvme_ioq_poll_period_us": 0, 00:18:59.584 "io_queue_requests": 512, 00:18:59.584 "delay_cmd_submit": true, 00:18:59.584 "transport_retry_count": 4, 00:18:59.584 "bdev_retry_count": 3, 00:18:59.584 "transport_ack_timeout": 0, 00:18:59.584 "ctrlr_loss_timeout_sec": 0, 00:18:59.584 "reconnect_delay_sec": 0, 00:18:59.584 "fast_io_fail_timeout_sec": 0, 00:18:59.584 "disable_auto_failback": false, 00:18:59.584 "generate_uuids": false, 00:18:59.584 "transport_tos": 0, 00:18:59.584 "nvme_error_stat": false, 00:18:59.584 "rdma_srq_size": 0, 00:18:59.584 "io_path_stat": false, 00:18:59.584 "allow_accel_sequence": false, 00:18:59.584 "rdma_max_cq_size": 0, 00:18:59.584 "rdma_cm_event_timeout_ms": 0, 00:18:59.584 "dhchap_digests": [ 00:18:59.584 "sha256", 00:18:59.584 "sha384", 00:18:59.584 "sha512" 00:18:59.584 ], 00:18:59.584 "dhchap_dhgroups": [ 00:18:59.584 "null", 00:18:59.584 "ffdhe2048", 00:18:59.584 "ffdhe3072", 00:18:59.584 "ffdhe4096", 00:18:59.584 "ffdhe6144", 00:18:59.584 "ffdhe8192" 00:18:59.584 ] 00:18:59.584 } 00:18:59.584 }, 00:18:59.584 { 00:18:59.584 "method": "bdev_nvme_attach_controller", 00:18:59.584 "params": { 00:18:59.584 "name": "nvme0", 00:18:59.584 "trtype": "TCP", 00:18:59.584 "adrfam": "IPv4", 00:18:59.584 "traddr": "10.0.0.2", 00:18:59.584 "trsvcid": "4420", 00:18:59.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.584 "prchk_reftag": false, 00:18:59.584 "prchk_guard": false, 00:18:59.584 "ctrlr_loss_timeout_sec": 0, 00:18:59.584 "reconnect_delay_sec": 0, 00:18:59.584 "fast_io_fail_timeout_sec": 0, 00:18:59.584 "psk": "key0", 00:18:59.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:59.584 "hdgst": false, 00:18:59.584 "ddgst": false 00:18:59.584 } 00:18:59.584 }, 00:18:59.584 { 00:18:59.584 "method": "bdev_nvme_set_hotplug", 00:18:59.584 "params": { 00:18:59.584 "period_us": 100000, 00:18:59.585 "enable": false 00:18:59.585 } 00:18:59.585 }, 00:18:59.585 { 00:18:59.585 "method": "bdev_enable_histogram", 00:18:59.585 "params": { 00:18:59.585 "name": "nvme0n1", 00:18:59.585 "enable": true 00:18:59.585 } 00:18:59.585 }, 00:18:59.585 { 00:18:59.585 "method": "bdev_wait_for_examine" 00:18:59.585 } 00:18:59.585 ] 00:18:59.585 }, 00:18:59.585 { 00:18:59.585 "subsystem": "nbd", 00:18:59.585 "config": [] 00:18:59.585 } 00:18:59.585 ] 00:18:59.585 }' 00:18:59.585 10:31:54 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2338335 00:18:59.585 10:31:54 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 2338335 ']' 00:18:59.585 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2338335 00:18:59.585 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:59.585 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:59.585 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2338335 00:18:59.585 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:59.585 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:59.585 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2338335' 00:18:59.585 killing process with pid 2338335 00:18:59.585 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2338335 00:18:59.585 Received shutdown signal, test time was about 1.000000 seconds 00:18:59.585 00:18:59.585 Latency(us) 00:18:59.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.585 =================================================================================================================== 00:18:59.585 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:59.585 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2338335 00:18:59.843 10:31:54 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2338186 00:18:59.843 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2338186 ']' 00:18:59.843 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2338186 00:18:59.843 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:59.844 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:59.844 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2338186 00:18:59.844 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:59.844 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:59.844 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2338186' 00:18:59.844 killing process with pid 2338186 00:18:59.844 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2338186 00:18:59.844 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2338186 00:19:00.102 10:31:54 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:19:00.102 10:31:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:00.102 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:00.102 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.102 10:31:54 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:19:00.102 "subsystems": [ 00:19:00.102 { 00:19:00.102 "subsystem": "keyring", 00:19:00.102 "config": [ 00:19:00.102 { 00:19:00.102 "method": "keyring_file_add_key", 00:19:00.102 "params": { 00:19:00.102 "name": "key0", 00:19:00.102 "path": "/tmp/tmp.AZAv4DftTx" 00:19:00.102 } 00:19:00.102 } 00:19:00.102 ] 00:19:00.102 }, 00:19:00.102 { 00:19:00.102 "subsystem": "iobuf", 00:19:00.102 "config": [ 00:19:00.102 { 00:19:00.102 "method": "iobuf_set_options", 00:19:00.102 "params": { 00:19:00.102 "small_pool_count": 8192, 00:19:00.102 "large_pool_count": 1024, 00:19:00.102 "small_bufsize": 8192, 00:19:00.102 
"large_bufsize": 135168 00:19:00.102 } 00:19:00.102 } 00:19:00.102 ] 00:19:00.102 }, 00:19:00.102 { 00:19:00.102 "subsystem": "sock", 00:19:00.102 "config": [ 00:19:00.102 { 00:19:00.102 "method": "sock_set_default_impl", 00:19:00.102 "params": { 00:19:00.102 "impl_name": "posix" 00:19:00.102 } 00:19:00.102 }, 00:19:00.102 { 00:19:00.102 "method": "sock_impl_set_options", 00:19:00.102 "params": { 00:19:00.102 "impl_name": "ssl", 00:19:00.102 "recv_buf_size": 4096, 00:19:00.102 "send_buf_size": 4096, 00:19:00.102 "enable_recv_pipe": true, 00:19:00.102 "enable_quickack": false, 00:19:00.102 "enable_placement_id": 0, 00:19:00.102 "enable_zerocopy_send_server": true, 00:19:00.102 "enable_zerocopy_send_client": false, 00:19:00.102 "zerocopy_threshold": 0, 00:19:00.102 "tls_version": 0, 00:19:00.102 "enable_ktls": false 00:19:00.102 } 00:19:00.102 }, 00:19:00.102 { 00:19:00.102 "method": "sock_impl_set_options", 00:19:00.102 "params": { 00:19:00.102 "impl_name": "posix", 00:19:00.102 "recv_buf_size": 2097152, 00:19:00.102 "send_buf_size": 2097152, 00:19:00.102 "enable_recv_pipe": true, 00:19:00.102 "enable_quickack": false, 00:19:00.102 "enable_placement_id": 0, 00:19:00.102 "enable_zerocopy_send_server": true, 00:19:00.102 "enable_zerocopy_send_client": false, 00:19:00.102 "zerocopy_threshold": 0, 00:19:00.102 "tls_version": 0, 00:19:00.102 "enable_ktls": false 00:19:00.102 } 00:19:00.102 } 00:19:00.102 ] 00:19:00.102 }, 00:19:00.102 { 00:19:00.102 "subsystem": "vmd", 00:19:00.102 "config": [] 00:19:00.102 }, 00:19:00.102 { 00:19:00.102 "subsystem": "accel", 00:19:00.102 "config": [ 00:19:00.102 { 00:19:00.102 "method": "accel_set_options", 00:19:00.102 "params": { 00:19:00.102 "small_cache_size": 128, 00:19:00.102 "large_cache_size": 16, 00:19:00.102 "task_count": 2048, 00:19:00.102 "sequence_count": 2048, 00:19:00.102 "buf_count": 2048 00:19:00.102 } 00:19:00.102 } 00:19:00.102 ] 00:19:00.102 }, 00:19:00.102 { 00:19:00.102 "subsystem": "bdev", 00:19:00.102 "config": [ 00:19:00.102 { 00:19:00.102 "method": "bdev_set_options", 00:19:00.102 "params": { 00:19:00.102 "bdev_io_pool_size": 65535, 00:19:00.102 "bdev_io_cache_size": 256, 00:19:00.102 "bdev_auto_examine": true, 00:19:00.102 "iobuf_small_cache_size": 128, 00:19:00.102 "iobuf_large_cache_size": 16 00:19:00.102 } 00:19:00.102 }, 00:19:00.102 { 00:19:00.102 "method": "bdev_raid_set_options", 00:19:00.102 "params": { 00:19:00.102 "process_window_size_kb": 1024 00:19:00.102 } 00:19:00.102 }, 00:19:00.102 { 00:19:00.102 "method": "bdev_iscsi_set_options", 00:19:00.102 "params": { 00:19:00.102 "timeout_sec": 30 00:19:00.102 } 00:19:00.102 }, 00:19:00.102 { 00:19:00.102 "method": "bdev_nvme_set_options", 00:19:00.102 "params": { 00:19:00.102 "action_on_timeout": "none", 00:19:00.102 "timeout_us": 0, 00:19:00.102 "timeout_admin_us": 0, 00:19:00.102 "keep_alive_timeout_ms": 10000, 00:19:00.102 "arbitration_burst": 0, 00:19:00.102 "low_priority_weight": 0, 00:19:00.102 "medium_priority_weight": 0, 00:19:00.102 "high_priority_weight": 0, 00:19:00.102 "nvme_adminq_poll_period_us": 10000, 00:19:00.102 "nvme_ioq_poll_period_us": 0, 00:19:00.102 "io_queue_requests": 0, 00:19:00.102 "delay_cmd_submit": true, 00:19:00.102 "transport_retry_count": 4, 00:19:00.102 "bdev_retry_count": 3, 00:19:00.102 "transport_ack_timeout": 0, 00:19:00.102 "ctrlr_loss_timeout_sec": 0, 00:19:00.102 "reconnect_delay_sec": 0, 00:19:00.102 "fast_io_fail_timeout_sec": 0, 00:19:00.102 "disable_auto_failback": false, 00:19:00.102 "generate_uuids": false, 00:19:00.102 
"transport_tos": 0, 00:19:00.102 "nvme_error_stat": false, 00:19:00.102 "rdma_srq_size": 0, 00:19:00.102 "io_path_stat": false, 00:19:00.102 "allow_accel_sequence": false, 00:19:00.102 "rdma_max_cq_size": 0, 00:19:00.102 "rdma_cm_event_timeout_ms": 0, 00:19:00.102 "dhchap_digests": [ 00:19:00.102 "sha256", 00:19:00.102 "sha384", 00:19:00.102 "sha512" 00:19:00.102 ], 00:19:00.102 "dhchap_dhgroups": [ 00:19:00.102 "null", 00:19:00.102 "ffdhe2048", 00:19:00.102 "ffdhe3072", 00:19:00.102 "ffdhe4096", 00:19:00.102 "ffdhe6144", 00:19:00.102 "ffdhe8192" 00:19:00.102 ] 00:19:00.102 } 00:19:00.102 }, 00:19:00.102 { 00:19:00.102 "method": "bdev_nvme_set_hotplug", 00:19:00.102 "params": { 00:19:00.102 "period_us": 100000, 00:19:00.102 "enable": false 00:19:00.102 } 00:19:00.102 }, 00:19:00.102 { 00:19:00.102 "method": "bdev_malloc_create", 00:19:00.102 "params": { 00:19:00.102 "name": "malloc0", 00:19:00.102 "num_blocks": 8192, 00:19:00.102 "block_size": 4096, 00:19:00.102 "physical_block_size": 4096, 00:19:00.102 "uuid": "37d61ede-ce0d-492d-bdac-ef08a2763a3c", 00:19:00.102 "optimal_io_boundary": 0 00:19:00.102 } 00:19:00.102 }, 00:19:00.102 { 00:19:00.102 "method": "bdev_wait_for_examine" 00:19:00.102 } 00:19:00.102 ] 00:19:00.102 }, 00:19:00.102 { 00:19:00.102 "subsystem": "nbd", 00:19:00.102 "config": [] 00:19:00.102 }, 00:19:00.102 { 00:19:00.102 "subsystem": "scheduler", 00:19:00.102 "config": [ 00:19:00.102 { 00:19:00.102 "method": "framework_set_scheduler", 00:19:00.102 "params": { 00:19:00.102 "name": "static" 00:19:00.102 } 00:19:00.102 } 00:19:00.102 ] 00:19:00.102 }, 00:19:00.102 { 00:19:00.102 "subsystem": "nvmf", 00:19:00.102 "config": [ 00:19:00.102 { 00:19:00.102 "method": "nvmf_set_config", 00:19:00.102 "params": { 00:19:00.102 "discovery_filter": "match_any", 00:19:00.102 "admin_cmd_passthru": { 00:19:00.102 "identify_ctrlr": false 00:19:00.102 } 00:19:00.102 } 00:19:00.102 }, 00:19:00.102 { 00:19:00.102 "method": "nvmf_set_max_subsystems", 00:19:00.102 "params": { 00:19:00.102 "max_subsystems": 1024 00:19:00.102 } 00:19:00.102 }, 00:19:00.102 { 00:19:00.103 "method": "nvmf_set_crdt", 00:19:00.103 "params": { 00:19:00.103 "crdt1": 0, 00:19:00.103 "crdt2": 0, 00:19:00.103 "crdt3": 0 00:19:00.103 } 00:19:00.103 }, 00:19:00.103 { 00:19:00.103 "method": "nvmf_create_transport", 00:19:00.103 "params": { 00:19:00.103 "trtype": "TCP", 00:19:00.103 "max_queue_depth": 128, 00:19:00.103 "max_io_qpairs_per_ctrlr": 127, 00:19:00.103 "in_capsule_data_size": 4096, 00:19:00.103 "max_io_size": 131072, 00:19:00.103 "io_unit_size": 131072, 00:19:00.103 "max_aq_depth": 128, 00:19:00.103 "num_shared_buffers": 511, 00:19:00.103 "buf_cache_size": 4294967295, 00:19:00.103 "dif_insert_or_strip": false, 00:19:00.103 "zcopy": false, 00:19:00.103 "c2h_success": false, 00:19:00.103 "sock_priority": 0, 00:19:00.103 "abort_timeout_sec": 1, 00:19:00.103 "ack_timeout": 0, 00:19:00.103 "data_wr_pool_size": 0 00:19:00.103 } 00:19:00.103 }, 00:19:00.103 { 00:19:00.103 "method": "nvmf_create_subsystem", 00:19:00.103 "params": { 00:19:00.103 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.103 "allow_any_host": false, 00:19:00.103 "serial_number": "00000000000000000000", 00:19:00.103 "model_number": "SPDK bdev Controller", 00:19:00.103 "max_namespaces": 32, 00:19:00.103 "min_cntlid": 1, 00:19:00.103 "max_cntlid": 65519, 00:19:00.103 "ana_reporting": false 00:19:00.103 } 00:19:00.103 }, 00:19:00.103 { 00:19:00.103 "method": "nvmf_subsystem_add_host", 00:19:00.103 "params": { 00:19:00.103 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:19:00.103 "host": "nqn.2016-06.io.spdk:host1", 00:19:00.103 "psk": "key0" 00:19:00.103 } 00:19:00.103 }, 00:19:00.103 { 00:19:00.103 "method": "nvmf_subsystem_add_ns", 00:19:00.103 "params": { 00:19:00.103 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.103 "namespace": { 00:19:00.103 "nsid": 1, 00:19:00.103 "bdev_name": "malloc0", 00:19:00.103 "nguid": "37D61EDECE0D492DBDACEF08A2763A3C", 00:19:00.103 "uuid": "37d61ede-ce0d-492d-bdac-ef08a2763a3c", 00:19:00.103 "no_auto_visible": false 00:19:00.103 } 00:19:00.103 } 00:19:00.103 }, 00:19:00.103 { 00:19:00.103 "method": "nvmf_subsystem_add_listener", 00:19:00.103 "params": { 00:19:00.103 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.103 "listen_address": { 00:19:00.103 "trtype": "TCP", 00:19:00.103 "adrfam": "IPv4", 00:19:00.103 "traddr": "10.0.0.2", 00:19:00.103 "trsvcid": "4420" 00:19:00.103 }, 00:19:00.103 "secure_channel": true 00:19:00.103 } 00:19:00.103 } 00:19:00.103 ] 00:19:00.103 } 00:19:00.103 ] 00:19:00.103 }' 00:19:00.103 10:31:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2338743 00:19:00.103 10:31:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:00.103 10:31:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2338743 00:19:00.103 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2338743 ']' 00:19:00.103 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.103 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:00.103 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.103 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:00.103 10:31:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.375 [2024-07-15 10:31:54.761464] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:00.375 [2024-07-15 10:31:54.761541] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.375 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.375 [2024-07-15 10:31:54.826836] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.375 [2024-07-15 10:31:54.932016] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.375 [2024-07-15 10:31:54.932070] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.375 [2024-07-15 10:31:54.932083] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.375 [2024-07-15 10:31:54.932093] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.375 [2024-07-15 10:31:54.932102] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:00.375 [2024-07-15 10:31:54.932176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.656 [2024-07-15 10:31:55.170222] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.656 [2024-07-15 10:31:55.202226] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:00.656 [2024-07-15 10:31:55.211082] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.224 10:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:01.224 10:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:01.224 10:31:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:01.224 10:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:01.224 10:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.224 10:31:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.224 10:31:55 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2338832 00:19:01.224 10:31:55 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2338832 /var/tmp/bdevperf.sock 00:19:01.224 10:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2338832 ']' 00:19:01.224 10:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.224 10:31:55 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:01.224 10:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:01.224 10:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
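The initiator gets the same treatment: the bperfcfg JSON echoed below is fed to this new bdevperf (pid 2338832) as /dev/fd/63, so the keyring key and the nvme0 controller come up from config alone, with no explicit keyring_file_add_key or attach RPCs this time; target/tls.sh@275 then only has to confirm the controller exists. Sketched with the flags from this run:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
        -c <(echo "$bperfcfg") &
    # the saved config already holds keyring_file_add_key plus
    # bdev_nvme_attach_controller, so the controller should just appear
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers \
        | jq -r '.[].name'   # expect: nvme0
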
00:19:01.224 10:31:55 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:19:01.224 "subsystems": [ 00:19:01.224 { 00:19:01.224 "subsystem": "keyring", 00:19:01.224 "config": [ 00:19:01.224 { 00:19:01.224 "method": "keyring_file_add_key", 00:19:01.224 "params": { 00:19:01.224 "name": "key0", 00:19:01.224 "path": "/tmp/tmp.AZAv4DftTx" 00:19:01.224 } 00:19:01.224 } 00:19:01.224 ] 00:19:01.224 }, 00:19:01.224 { 00:19:01.224 "subsystem": "iobuf", 00:19:01.224 "config": [ 00:19:01.224 { 00:19:01.224 "method": "iobuf_set_options", 00:19:01.224 "params": { 00:19:01.224 "small_pool_count": 8192, 00:19:01.224 "large_pool_count": 1024, 00:19:01.224 "small_bufsize": 8192, 00:19:01.224 "large_bufsize": 135168 00:19:01.224 } 00:19:01.224 } 00:19:01.224 ] 00:19:01.224 }, 00:19:01.224 { 00:19:01.224 "subsystem": "sock", 00:19:01.224 "config": [ 00:19:01.224 { 00:19:01.224 "method": "sock_set_default_impl", 00:19:01.224 "params": { 00:19:01.224 "impl_name": "posix" 00:19:01.224 } 00:19:01.224 }, 00:19:01.224 { 00:19:01.224 "method": "sock_impl_set_options", 00:19:01.224 "params": { 00:19:01.224 "impl_name": "ssl", 00:19:01.224 "recv_buf_size": 4096, 00:19:01.224 "send_buf_size": 4096, 00:19:01.224 "enable_recv_pipe": true, 00:19:01.224 "enable_quickack": false, 00:19:01.224 "enable_placement_id": 0, 00:19:01.224 "enable_zerocopy_send_server": true, 00:19:01.224 "enable_zerocopy_send_client": false, 00:19:01.224 "zerocopy_threshold": 0, 00:19:01.224 "tls_version": 0, 00:19:01.224 "enable_ktls": false 00:19:01.224 } 00:19:01.224 }, 00:19:01.224 { 00:19:01.224 "method": "sock_impl_set_options", 00:19:01.224 "params": { 00:19:01.224 "impl_name": "posix", 00:19:01.224 "recv_buf_size": 2097152, 00:19:01.224 "send_buf_size": 2097152, 00:19:01.224 "enable_recv_pipe": true, 00:19:01.224 "enable_quickack": false, 00:19:01.224 "enable_placement_id": 0, 00:19:01.224 "enable_zerocopy_send_server": true, 00:19:01.224 "enable_zerocopy_send_client": false, 00:19:01.224 "zerocopy_threshold": 0, 00:19:01.224 "tls_version": 0, 00:19:01.224 "enable_ktls": false 00:19:01.224 } 00:19:01.224 } 00:19:01.224 ] 00:19:01.224 }, 00:19:01.224 { 00:19:01.224 "subsystem": "vmd", 00:19:01.224 "config": [] 00:19:01.224 }, 00:19:01.224 { 00:19:01.224 "subsystem": "accel", 00:19:01.224 "config": [ 00:19:01.224 { 00:19:01.224 "method": "accel_set_options", 00:19:01.224 "params": { 00:19:01.224 "small_cache_size": 128, 00:19:01.224 "large_cache_size": 16, 00:19:01.224 "task_count": 2048, 00:19:01.224 "sequence_count": 2048, 00:19:01.224 "buf_count": 2048 00:19:01.224 } 00:19:01.224 } 00:19:01.224 ] 00:19:01.224 }, 00:19:01.224 { 00:19:01.224 "subsystem": "bdev", 00:19:01.224 "config": [ 00:19:01.224 { 00:19:01.224 "method": "bdev_set_options", 00:19:01.224 "params": { 00:19:01.224 "bdev_io_pool_size": 65535, 00:19:01.224 "bdev_io_cache_size": 256, 00:19:01.224 "bdev_auto_examine": true, 00:19:01.224 "iobuf_small_cache_size": 128, 00:19:01.224 "iobuf_large_cache_size": 16 00:19:01.224 } 00:19:01.224 }, 00:19:01.224 { 00:19:01.224 "method": "bdev_raid_set_options", 00:19:01.224 "params": { 00:19:01.224 "process_window_size_kb": 1024 00:19:01.224 } 00:19:01.224 }, 00:19:01.224 { 00:19:01.224 "method": "bdev_iscsi_set_options", 00:19:01.224 "params": { 00:19:01.224 "timeout_sec": 30 00:19:01.224 } 00:19:01.224 }, 00:19:01.224 { 00:19:01.224 "method": "bdev_nvme_set_options", 00:19:01.224 "params": { 00:19:01.224 "action_on_timeout": "none", 00:19:01.224 "timeout_us": 0, 00:19:01.224 "timeout_admin_us": 0, 00:19:01.224 "keep_alive_timeout_ms": 
10000, 00:19:01.224 "arbitration_burst": 0, 00:19:01.224 "low_priority_weight": 0, 00:19:01.224 "medium_priority_weight": 0, 00:19:01.224 "high_priority_weight": 0, 00:19:01.224 "nvme_adminq_poll_period_us": 10000, 00:19:01.224 "nvme_ioq_poll_period_us": 0, 00:19:01.224 "io_queue_requests": 512, 00:19:01.224 "delay_cmd_submit": true, 00:19:01.224 "transport_retry_count": 4, 00:19:01.224 "bdev_retry_count": 3, 00:19:01.224 "transport_ack_timeout": 0, 00:19:01.224 "ctrlr_loss_timeout_sec": 0, 00:19:01.224 "reconnect_delay_sec": 0, 00:19:01.224 "fast_io_fail_timeout_sec": 0, 00:19:01.224 "disable_auto_failback": false, 00:19:01.224 "generate_uuids": false, 00:19:01.224 "transport_tos": 0, 00:19:01.224 "nvme_error_stat": false, 00:19:01.224 "rdma_srq_size": 0, 00:19:01.224 "io_path_stat": false, 00:19:01.224 "allow_accel_sequence": false, 00:19:01.224 "rdma_max_cq_size": 0, 00:19:01.224 "rdma_cm_event_timeout_ms": 0, 00:19:01.224 "dhchap_digests": [ 00:19:01.224 "sha256", 00:19:01.224 "sha384", 00:19:01.224 "sha512" 00:19:01.224 ], 00:19:01.224 "dhchap_dhgroups": [ 00:19:01.224 "null", 00:19:01.224 "ffdhe2048", 00:19:01.224 "ffdhe3072", 00:19:01.224 "ffdhe4096", 00:19:01.224 "ffdhe6144", 00:19:01.224 "ffdhe8192" 00:19:01.224 ] 00:19:01.224 } 00:19:01.224 }, 00:19:01.224 { 00:19:01.224 "method": "bdev_nvme_attach_controller", 00:19:01.224 "params": { 00:19:01.224 "name": "nvme0", 00:19:01.224 "trtype": "TCP", 00:19:01.224 "adrfam": "IPv4", 00:19:01.224 "traddr": "10.0.0.2", 00:19:01.224 "trsvcid": "4420", 00:19:01.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.224 "prchk_reftag": false, 00:19:01.224 "prchk_guard": false, 00:19:01.224 "ctrlr_loss_timeout_sec": 0, 00:19:01.224 "reconnect_delay_sec": 0, 00:19:01.224 "fast_io_fail_timeout_sec": 0, 00:19:01.224 "psk": "key0", 00:19:01.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.224 "hdgst": false, 00:19:01.224 "ddgst": false 00:19:01.224 } 00:19:01.224 }, 00:19:01.224 { 00:19:01.224 "method": "bdev_nvme_set_hotplug", 00:19:01.224 "params": { 00:19:01.224 "period_us": 100000, 00:19:01.224 "enable": false 00:19:01.224 } 00:19:01.224 }, 00:19:01.224 { 00:19:01.224 "method": "bdev_enable_histogram", 00:19:01.224 "params": { 00:19:01.224 "name": "nvme0n1", 00:19:01.224 "enable": true 00:19:01.224 } 00:19:01.224 }, 00:19:01.225 { 00:19:01.225 "method": "bdev_wait_for_examine" 00:19:01.225 } 00:19:01.225 ] 00:19:01.225 }, 00:19:01.225 { 00:19:01.225 "subsystem": "nbd", 00:19:01.225 "config": [] 00:19:01.225 } 00:19:01.225 ] 00:19:01.225 }' 00:19:01.225 10:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:01.225 10:31:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.225 [2024-07-15 10:31:55.783911] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
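The JSON blob echoed above is the entire bdevperf configuration for this TLS run: the keyring subsystem registers the PSK file as "key0", and the bdev subsystem attaches an NVMe/TCP controller that references that key through "psk": "key0". A stripped-down sketch of the same setup follows; /tmp/psk.txt is a placeholder path, the tuning sections (iobuf, sock, accel) are omitted, and --json is the generic SPDK application option for loading such a config at startup, so treat this as an illustration rather than the exact invocation used by tls.sh.

    cat > /tmp/bdevperf_tls.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "keyring",
          "config": [
            { "method": "keyring_file_add_key",
              "params": { "name": "key0", "path": "/tmp/psk.txt" } }
          ]
        },
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_nvme_attach_controller",
              "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                          "traddr": "10.0.0.2", "trsvcid": "4420",
                          "subnqn": "nqn.2016-06.io.spdk:cnode1",
                          "hostnqn": "nqn.2016-06.io.spdk:host1",
                          "psk": "key0" } },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    # -z keeps bdevperf idle until the perform_tests RPC seen later in the log.
    ./build/examples/bdevperf --json /tmp/bdevperf_tls.json -z \
        -q 128 -o 4096 -w verify -t 1

The bdev_enable_histogram call in the real config is what enables the per-namespace latency table printed after the run.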
00:19:01.225 [2024-07-15 10:31:55.784014] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2338832 ] 00:19:01.225 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.225 [2024-07-15 10:31:55.846503] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.483 [2024-07-15 10:31:55.963143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.740 [2024-07-15 10:31:56.149547] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.304 10:31:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:02.304 10:31:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:02.304 10:31:56 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:02.304 10:31:56 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:19:02.602 10:31:56 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.602 10:31:56 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:02.602 Running I/O for 1 seconds... 00:19:03.539 00:19:03.539 Latency(us) 00:19:03.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.539 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:03.539 Verification LBA range: start 0x0 length 0x2000 00:19:03.539 nvme0n1 : 1.05 2778.53 10.85 0.00 0.00 45057.28 11893.57 64468.01 00:19:03.539 =================================================================================================================== 00:19:03.539 Total : 2778.53 10.85 0.00 0.00 45057.28 11893.57 64468.01 00:19:03.539 0 00:19:03.539 10:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:19:03.539 10:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:19:03.539 10:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:03.539 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:19:03.540 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:19:03.540 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:03.540 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:03.540 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:03.540 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:03.540 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:03.540 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:03.540 nvmf_trace.0 00:19:03.798 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:19:03.798 10:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2338832 00:19:03.798 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2338832 ']' 00:19:03.798 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 
-- # kill -0 2338832 00:19:03.798 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:03.798 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:03.798 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2338832 00:19:03.798 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:03.798 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:03.798 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2338832' 00:19:03.798 killing process with pid 2338832 00:19:03.798 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2338832 00:19:03.798 Received shutdown signal, test time was about 1.000000 seconds 00:19:03.798 00:19:03.798 Latency(us) 00:19:03.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.798 =================================================================================================================== 00:19:03.798 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:03.798 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2338832 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:04.058 rmmod nvme_tcp 00:19:04.058 rmmod nvme_fabrics 00:19:04.058 rmmod nvme_keyring 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2338743 ']' 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2338743 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2338743 ']' 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2338743 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2338743 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2338743' 00:19:04.058 killing process with pid 2338743 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2338743 00:19:04.058 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2338743 00:19:04.316 10:31:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:04.317 10:31:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:04.317 10:31:58 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:04.317 10:31:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:04.317 10:31:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:04.317 10:31:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.317 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.317 10:31:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.851 10:32:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:06.851 10:32:00 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.TQBXgr3MjH /tmp/tmp.GCKoofI4BP /tmp/tmp.AZAv4DftTx 00:19:06.851 00:19:06.851 real 1m24.156s 00:19:06.851 user 2m15.034s 00:19:06.851 sys 0m27.481s 00:19:06.851 10:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:06.851 10:32:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.851 ************************************ 00:19:06.851 END TEST nvmf_tls 00:19:06.851 ************************************ 00:19:06.851 10:32:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:06.851 10:32:01 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:06.851 10:32:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:06.851 10:32:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:06.851 10:32:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:06.851 ************************************ 00:19:06.851 START TEST nvmf_fips 00:19:06.851 ************************************ 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:06.851 * Looking for test storage... 
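Before the FIPS suite spins the stack back up, the TLS teardown that just ran (process_shm, killprocess, nvmftestfini, cleanup) reduces to a short sequence. This is a condensed rereading of the trace above, not a literal transcript; in particular, the ip netns deletion is an assumption about what _remove_spdk_ns ultimately does.

    tar -C /dev/shm -czf output/nvmf_trace.0_shm.tar.gz nvmf_trace.0  # preserve trace
    kill 2338832                        # bdevperf (reactor_1)
    sync
    modprobe -v -r nvme-tcp             # unload initiator-side kernel modules
    modprobe -v -r nvme-fabrics
    kill 2338743                        # nvmf_tgt (reactor_0)
    ip netns del cvl_0_0_ns_spdk        # presumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1            # drop the initiator test address
    rm -f /tmp/tmp.TQBXgr3MjH /tmp/tmp.GCKoofI4BP /tmp/tmp.AZAv4DftTx  # PSK files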
00:19:06.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.851 10:32:01 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:06.851 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:19:06.852 Error setting digest 00:19:06.852 00A2ADB4E07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:06.852 00A2ADB4E07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:19:06.852 10:32:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:08.754 
10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:08.754 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:08.754 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:08.754 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:08.754 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:08.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:08.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:19:08.754 00:19:08.754 --- 10.0.0.2 ping statistics --- 00:19:08.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.754 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:08.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
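The nvmf_tcp_init sequence above is the whole network fixture for these phy tests: one E810 port (cvl_0_0) moves into a private namespace and becomes the target at 10.0.0.2, the sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the bidirectional pings confirm the link. Condensed straight from the trace, with the interface names as detected on this machine:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Every target-side command from here on is wrapped in 'ip netns exec cvl_0_0_ns_spdk', which is what NVMF_TARGET_NS_CMD expands to.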
00:19:08.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:19:08.754 00:19:08.754 --- 10.0.0.1 ping statistics --- 00:19:08.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.754 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2341143 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2341143 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2341143 ']' 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:08.754 10:32:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:09.012 [2024-07-15 10:32:03.428521] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:09.012 [2024-07-15 10:32:03.428610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.012 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.012 [2024-07-15 10:32:03.490847] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.012 [2024-07-15 10:32:03.594817] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.012 [2024-07-15 10:32:03.594873] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:09.012 [2024-07-15 10:32:03.594903] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.012 [2024-07-15 10:32:03.594915] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.012 [2024-07-15 10:32:03.594924] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:09.012 [2024-07-15 10:32:03.594960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.946 10:32:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:09.946 10:32:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:09.946 10:32:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:09.946 10:32:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:09.946 10:32:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:09.946 10:32:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.946 10:32:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:09.946 10:32:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:09.946 10:32:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:09.946 10:32:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:09.946 10:32:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:09.946 10:32:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:09.946 10:32:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:09.946 10:32:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:10.204 [2024-07-15 10:32:04.682218] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.204 [2024-07-15 10:32:04.698210] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:10.204 [2024-07-15 10:32:04.698433] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.204 [2024-07-15 10:32:04.730775] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:10.204 malloc0 00:19:10.204 10:32:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:10.204 10:32:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2341304 00:19:10.204 10:32:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:10.204 10:32:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2341304 /var/tmp/bdevperf.sock 00:19:10.204 10:32:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2341304 ']' 00:19:10.204 10:32:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.204 10:32:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:19:10.204 10:32:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.204 10:32:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.204 10:32:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:10.204 [2024-07-15 10:32:04.823838] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:10.204 [2024-07-15 10:32:04.823952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2341304 ] 00:19:10.461 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.461 [2024-07-15 10:32:04.885571] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.461 [2024-07-15 10:32:04.991561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.397 10:32:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:11.397 10:32:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:11.397 10:32:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:11.397 [2024-07-15 10:32:05.987921] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:11.397 [2024-07-15 10:32:05.988035] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:11.656 TLSTESTn1 00:19:11.656 10:32:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:11.656 Running I/O for 10 seconds... 
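The attach that started this 10-second run is the core of the FIPS/TLS exercise: the test's sample interchange PSK (the trailing colon is part of the key) goes into a 0600-mode file, and bdevperf's controller references that file with --psk. Pulled together from the trace, with the long Jenkins workspace prefix shortened for readability:

    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' \
        > test/nvmf/fips/key.txt
    chmod 0600 test/nvmf/fips/key.txt
    # TLS over TCP; the deprecation warnings for the PSK-path style are expected.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTESTn1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Because OPENSSL_CONF was pointed at the generated spdk_fips.conf earlier, this handshake runs with only the base and FIPS providers loaded, which is exactly what the deliberate 'openssl md5' failure above verified.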
00:19:21.629 00:19:21.629 Latency(us) 00:19:21.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.629 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:21.629 Verification LBA range: start 0x0 length 0x2000 00:19:21.629 TLSTESTn1 : 10.04 3113.35 12.16 0.00 0.00 41012.63 7233.23 59807.67 00:19:21.629 =================================================================================================================== 00:19:21.629 Total : 3113.35 12.16 0.00 0.00 41012.63 7233.23 59807.67 00:19:21.629 0 00:19:21.629 10:32:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:21.629 10:32:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:21.629 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:19:21.629 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:19:21.629 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:21.629 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:21.629 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:21.629 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:21.629 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:21.629 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:21.629 nvmf_trace.0 00:19:21.888 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:19:21.889 10:32:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2341304 00:19:21.889 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2341304 ']' 00:19:21.889 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2341304 00:19:21.889 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:21.889 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:21.889 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2341304 00:19:21.889 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:21.889 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:21.889 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2341304' 00:19:21.889 killing process with pid 2341304 00:19:21.889 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2341304 00:19:21.889 Received shutdown signal, test time was about 10.000000 seconds 00:19:21.889 00:19:21.889 Latency(us) 00:19:21.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.889 =================================================================================================================== 00:19:21.889 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:21.889 [2024-07-15 10:32:16.351132] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:21.889 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2341304 00:19:22.147 10:32:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:22.147 10:32:16 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:19:22.147 10:32:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:22.147 10:32:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:22.147 10:32:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:22.147 10:32:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:22.148 10:32:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:22.148 rmmod nvme_tcp 00:19:22.148 rmmod nvme_fabrics 00:19:22.148 rmmod nvme_keyring 00:19:22.148 10:32:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:22.148 10:32:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:22.148 10:32:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:22.148 10:32:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2341143 ']' 00:19:22.148 10:32:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2341143 00:19:22.148 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2341143 ']' 00:19:22.148 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2341143 00:19:22.148 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:22.148 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:22.148 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2341143 00:19:22.148 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:22.148 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:22.148 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2341143' 00:19:22.148 killing process with pid 2341143 00:19:22.148 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2341143 00:19:22.148 [2024-07-15 10:32:16.715596] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:22.148 10:32:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2341143 00:19:22.412 10:32:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:22.412 10:32:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:22.412 10:32:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:22.412 10:32:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:22.412 10:32:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:22.412 10:32:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.412 10:32:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:22.412 10:32:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.985 10:32:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:24.985 10:32:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:24.985 00:19:24.985 real 0m18.013s 00:19:24.985 user 0m23.604s 00:19:24.985 sys 0m5.950s 00:19:24.985 10:32:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:24.985 10:32:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:24.985 ************************************ 00:19:24.985 END TEST nvmf_fips 
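With nvmf_fips done, nvmf.sh re-runs the same gather_supported_nvmf_pci_devs scan seen before the FIPS test to choose interfaces for the next suite. Its shape, boiled down: the device-ID lists are the ones visible in the trace, but the loop below is an approximation of nvmf/common.sh (which caches the PCI bus up front), not a copy of it.

    intel=0x8086 mellanox=0x15b3
    e810=(0x1592 0x159b)                       # Intel E810 variants (this rig)
    x722=(0x37d2)
    mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        [[ $vendor == "$intel" || $vendor == "$mellanox" ]] || continue
        for id in "${e810[@]}" "${x722[@]}" "${mlx[@]}"; do
            [[ $device == "$id" ]] && echo "Found ${dev##*/} ($vendor - $device)"
        done
    done
    # Each hit is then mapped to its netdev through /sys/bus/pci/devices/$pci/net/*,
    # producing the 'Found net devices under 0000:0a:00.x: cvl_0_x' lines.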
00:19:24.985 ************************************ 00:19:24.985 10:32:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:24.985 10:32:19 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:24.985 10:32:19 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:24.985 10:32:19 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:24.985 10:32:19 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:24.985 10:32:19 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:19:24.985 10:32:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.882 10:32:21 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:26.883 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:26.883 10:32:21 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:26.883 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:26.883 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:26.883 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:26.883 10:32:21 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:26.883 10:32:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:26.883 10:32:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
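The scan traced above resolves each matching PCI function to its kernel netdev by globbing sysfs and stripping the path prefix. A minimal stand-alone sketch of that mapping, using the PCI address and netdev naming seen in this run (illustrative, not part of the harness):

  # Map an E810 PCI function to the netdev the kernel registered for it;
  # on this host 0000:0a:00.0 is expected to resolve to cvl_0_0.
  pci=0000:0a:00.0
  for path in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$path" ] && echo "NIC $pci exposes netdev ${path##*/}"
  done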
00:19:26.883 10:32:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:26.883 ************************************ 00:19:26.883 START TEST nvmf_perf_adq 00:19:26.883 ************************************ 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:26.883 * Looking for test storage... 00:19:26.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:26.883 10:32:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:28.804 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.804 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:28.804 Found 0000:0a:00.1 (0x8086 - 0x159b) 
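Both ports again report vendor:device 0x8086:0x159b, which is why the helper files them under e810 rather than x722 or mlx. A condensed, illustrative view of the ID table that the array appends above encode (IDs copied from the trace; the associative-array layout is an assumption for readability, not the script's actual structure):

  # Vendor 0x8086 is Intel, 0x15b3 is Mellanox; values name the NIC family.
  declare -A nic_family=(
      ["0x8086:0x1592"]=e810 ["0x8086:0x159b"]=e810
      ["0x8086:0x37d2"]=x722
      ["0x15b3:0x1017"]=mlx  ["0x15b3:0x1019"]=mlx
  )
  echo "0x8086:0x159b -> ${nic_family["0x8086:0x159b"]}"   # prints e810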
00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:28.805 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:28.805 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:28.805 10:32:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:29.066 10:32:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:30.971 10:32:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:36.255 10:32:30 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:36.255 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:36.255 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:36.255 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:36.255 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:36.255 10:32:30 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:36.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:36.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:19:36.255 00:19:36.255 --- 10.0.0.2 ping statistics --- 00:19:36.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.255 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:36.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:36.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:19:36.255 00:19:36.255 --- 10.0.0.1 ping statistics --- 00:19:36.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.255 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:36.255 10:32:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.256 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2347176 00:19:36.256 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:36.256 10:32:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2347176 00:19:36.256 10:32:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2347176 ']' 00:19:36.256 10:32:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.256 10:32:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.256 10:32:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.256 10:32:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.256 10:32:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.256 [2024-07-15 10:32:30.816272] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
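Before the target app comes up, nvmf_tcp_init has wired the two E810 ports back-to-back across a network namespace, and the two pings above confirm reachability in both directions. Condensed from the commands traced earlier (device names and addresses are the ones this run used, ordering approximated):

  # Target side lives in its own netns; the initiator stays in the root netns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator to target, as verified above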
00:19:36.256 [2024-07-15 10:32:30.816361] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.256 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.256 [2024-07-15 10:32:30.896033] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:36.513 [2024-07-15 10:32:31.033902] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.513 [2024-07-15 10:32:31.033969] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.513 [2024-07-15 10:32:31.033996] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.513 [2024-07-15 10:32:31.034017] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.513 [2024-07-15 10:32:31.034034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.513 [2024-07-15 10:32:31.034107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.513 [2024-07-15 10:32:31.034167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.513 [2024-07-15 10:32:31.034247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:36.513 [2024-07-15 10:32:31.034256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.513 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.513 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:36.513 10:32:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.513 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.513 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.513 10:32:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.513 10:32:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:36.513 10:32:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:36.513 10:32:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:36.513 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.513 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.513 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.772 [2024-07-15 10:32:31.304869] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.772 Malloc1 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.772 [2024-07-15 10:32:31.357980] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2347319 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:36.772 10:32:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:36.772 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.304 10:32:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:39.304 10:32:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.304 10:32:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:39.304 10:32:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.304 10:32:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:39.304 
"tick_rate": 2700000000, 00:19:39.304 "poll_groups": [ 00:19:39.304 { 00:19:39.304 "name": "nvmf_tgt_poll_group_000", 00:19:39.304 "admin_qpairs": 1, 00:19:39.304 "io_qpairs": 1, 00:19:39.304 "current_admin_qpairs": 1, 00:19:39.304 "current_io_qpairs": 1, 00:19:39.304 "pending_bdev_io": 0, 00:19:39.304 "completed_nvme_io": 20754, 00:19:39.304 "transports": [ 00:19:39.304 { 00:19:39.304 "trtype": "TCP" 00:19:39.304 } 00:19:39.304 ] 00:19:39.304 }, 00:19:39.304 { 00:19:39.304 "name": "nvmf_tgt_poll_group_001", 00:19:39.304 "admin_qpairs": 0, 00:19:39.304 "io_qpairs": 1, 00:19:39.304 "current_admin_qpairs": 0, 00:19:39.304 "current_io_qpairs": 1, 00:19:39.304 "pending_bdev_io": 0, 00:19:39.304 "completed_nvme_io": 20391, 00:19:39.304 "transports": [ 00:19:39.304 { 00:19:39.304 "trtype": "TCP" 00:19:39.304 } 00:19:39.304 ] 00:19:39.304 }, 00:19:39.304 { 00:19:39.304 "name": "nvmf_tgt_poll_group_002", 00:19:39.304 "admin_qpairs": 0, 00:19:39.304 "io_qpairs": 1, 00:19:39.304 "current_admin_qpairs": 0, 00:19:39.304 "current_io_qpairs": 1, 00:19:39.304 "pending_bdev_io": 0, 00:19:39.305 "completed_nvme_io": 21517, 00:19:39.305 "transports": [ 00:19:39.305 { 00:19:39.305 "trtype": "TCP" 00:19:39.305 } 00:19:39.305 ] 00:19:39.305 }, 00:19:39.305 { 00:19:39.305 "name": "nvmf_tgt_poll_group_003", 00:19:39.305 "admin_qpairs": 0, 00:19:39.305 "io_qpairs": 1, 00:19:39.305 "current_admin_qpairs": 0, 00:19:39.305 "current_io_qpairs": 1, 00:19:39.305 "pending_bdev_io": 0, 00:19:39.305 "completed_nvme_io": 20865, 00:19:39.305 "transports": [ 00:19:39.305 { 00:19:39.305 "trtype": "TCP" 00:19:39.305 } 00:19:39.305 ] 00:19:39.305 } 00:19:39.305 ] 00:19:39.305 }' 00:19:39.305 10:32:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:39.305 10:32:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:39.305 10:32:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:39.305 10:32:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:39.305 10:32:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2347319 00:19:47.415 Initializing NVMe Controllers 00:19:47.415 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:47.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:47.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:47.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:47.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:47.415 Initialization complete. Launching workers. 
00:19:47.415 ========================================================
00:19:47.415 Latency(us)
00:19:47.415 Device Information : IOPS MiB/s Average min max
00:19:47.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11272.89 44.03 5679.20 2547.82 7572.59
00:19:47.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10661.34 41.65 6005.26 3637.52 7915.62
00:19:47.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10916.42 42.64 5863.51 2144.34 8110.71
00:19:47.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10909.92 42.62 5867.44 3171.60 7700.62
00:19:47.415 ========================================================
00:19:47.415 Total : 43760.56 170.94 5851.54 2144.34 8110.71
00:19:47.415 00
10:32:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini
10:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
10:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
10:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
10:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
10:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
10:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
10:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
10:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
10:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
10:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2347176 ']'
10:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2347176
10:32:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2347176 ']'
10:32:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2347176
10:32:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname
10:32:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
10:32:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2347176
10:32:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0
10:32:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
10:32:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2347176'
killing process with pid 2347176
10:32:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2347176
10:32:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2347176
10:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
10:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
10:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
10:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
10:32:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns
10:32:41 nvmf_tcp.nvmf_perf_adq --
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.415 10:32:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.415 10:32:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.318 10:32:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:49.318 10:32:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:49.318 10:32:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:49.933 10:32:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:52.468 10:32:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:57.747 10:32:51 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:57.747 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:57.747 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
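A few entries back (perf_adq.sh@53-55), the harness tore down the namespaces and reloaded the ice driver before starting this second device scan; ADQ needs a fresh driver instance with its channel configuration intact. The settle sequence, condensed (the || true is an addition here to tolerate a module that is not loaded):

  # Reload the E810 driver and give the ports time to re-enumerate.
  rmmod ice || true
  modprobe ice
  sleep 5    # ports must re-register before netns setup resumes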
00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:57.747 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:57.747 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:57.747 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:57.748 
10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:57.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:57.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:19:57.748 00:19:57.748 --- 10.0.0.2 ping statistics --- 00:19:57.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.748 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:57.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:57.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:19:57.748 00:19:57.748 --- 10.0.0.1 ping statistics --- 00:19:57.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.748 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:57.748 net.core.busy_poll = 1 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:57.748 net.core.busy_read = 1 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2349936 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2349936 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2349936 ']' 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.748 10:32:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:57.748 [2024-07-15 10:32:51.875250] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:57.748 [2024-07-15 10:32:51.875324] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.748 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.748 [2024-07-15 10:32:51.941042] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:57.748 [2024-07-15 10:32:52.057832] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.748 [2024-07-15 10:32:52.057925] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.748 [2024-07-15 10:32:52.057947] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.748 [2024-07-15 10:32:52.057961] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.748 [2024-07-15 10:32:52.057973] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
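[editor note] adq_configure_driver, traced above, is the ADQ half of the setup: hardware tc offload is enabled on the E810 port, busy polling is turned on, an mqprio root qdisc in channel mode splits the port into two traffic classes of two queues each, and a flower filter with skip_sw pins NVMe/TCP traffic for 10.0.0.2:4420 to traffic class 1 in hardware (set_xps_rxqs then aligns transmit queue selection with the receive queues). Condensed into a standalone sketch, with the device name, address, and port exactly as this run used them:

DEV=cvl_0_0
NS='ip netns exec cvl_0_0_ns_spdk'
$NS ethtool --offload $DEV hw-tc-offload on
$NS ethtool --set-priv-flags $DEV channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
$NS tc qdisc add dev $DEV root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev $DEV ingress
$NS tc filter add dev $DEV protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1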
00:19:57.748 [2024-07-15 10:32:52.058038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.748 [2024-07-15 10:32:52.058093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.748 [2024-07-15 10:32:52.058370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.748 [2024-07-15 10:32:52.058373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.315 10:32:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.573 10:32:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.573 10:32:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:58.573 10:32:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.573 10:32:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.573 [2024-07-15 10:32:52.997545] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.573 Malloc1 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.573 10:32:53 
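[editor note] Because the target was launched with --wait-for-rpc, socket-implementation options can be set before subsystem initialization; --enable-placement-id 1 is what later lets the TCP transport group incoming qpairs by the NIC queue that received them. The rpc_cmd calls above map one for one onto scripts/rpc.py, so the same sequence as a sketch:

# configure the posix sock impl, then finish init and build the transport
rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
rpc.py bdev_malloc_create 64 512 -b Malloc1

The nvmf_create_subsystem, nvmf_subsystem_add_ns, and nvmf_subsystem_add_listener calls that follow in the trace complete the target side.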
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.573 [2024-07-15 10:32:53.048660] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2350102 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:58.573 10:32:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:58.573 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.485 10:32:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:00.485 10:32:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.485 10:32:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.485 10:32:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.485 10:32:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:00.485 "tick_rate": 2700000000, 00:20:00.485 "poll_groups": [ 00:20:00.485 { 00:20:00.485 "name": "nvmf_tgt_poll_group_000", 00:20:00.485 "admin_qpairs": 1, 00:20:00.485 "io_qpairs": 2, 00:20:00.485 "current_admin_qpairs": 1, 00:20:00.485 "current_io_qpairs": 2, 00:20:00.485 "pending_bdev_io": 0, 00:20:00.485 "completed_nvme_io": 25558, 00:20:00.485 "transports": [ 00:20:00.485 { 00:20:00.485 "trtype": "TCP" 00:20:00.485 } 00:20:00.485 ] 00:20:00.485 }, 00:20:00.485 { 00:20:00.485 "name": "nvmf_tgt_poll_group_001", 00:20:00.485 "admin_qpairs": 0, 00:20:00.485 "io_qpairs": 2, 00:20:00.485 "current_admin_qpairs": 0, 00:20:00.485 "current_io_qpairs": 2, 00:20:00.485 "pending_bdev_io": 0, 00:20:00.485 "completed_nvme_io": 25889, 00:20:00.485 "transports": [ 00:20:00.485 { 00:20:00.485 "trtype": "TCP" 00:20:00.485 } 00:20:00.485 ] 00:20:00.485 }, 00:20:00.485 { 00:20:00.485 "name": "nvmf_tgt_poll_group_002", 00:20:00.485 "admin_qpairs": 0, 00:20:00.485 "io_qpairs": 0, 00:20:00.485 "current_admin_qpairs": 0, 00:20:00.485 "current_io_qpairs": 0, 00:20:00.485 "pending_bdev_io": 0, 00:20:00.485 "completed_nvme_io": 0, 
00:20:00.485 "transports": [ 00:20:00.485 { 00:20:00.485 "trtype": "TCP" 00:20:00.485 } 00:20:00.485 ] 00:20:00.485 }, 00:20:00.485 { 00:20:00.485 "name": "nvmf_tgt_poll_group_003", 00:20:00.485 "admin_qpairs": 0, 00:20:00.485 "io_qpairs": 0, 00:20:00.485 "current_admin_qpairs": 0, 00:20:00.485 "current_io_qpairs": 0, 00:20:00.485 "pending_bdev_io": 0, 00:20:00.485 "completed_nvme_io": 0, 00:20:00.485 "transports": [ 00:20:00.485 { 00:20:00.485 "trtype": "TCP" 00:20:00.485 } 00:20:00.485 ] 00:20:00.485 } 00:20:00.485 ] 00:20:00.485 }' 00:20:00.485 10:32:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:00.485 10:32:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:20:00.485 10:32:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:00.485 10:32:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:00.485 10:32:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2350102 00:20:08.601 Initializing NVMe Controllers 00:20:08.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:08.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:08.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:08.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:08.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:08.601 Initialization complete. Launching workers. 00:20:08.601 ======================================================== 00:20:08.601 Latency(us) 00:20:08.601 Device Information : IOPS MiB/s Average min max 00:20:08.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6633.20 25.91 9650.47 1808.12 55254.73 00:20:08.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5984.10 23.38 10721.93 1874.72 55461.07 00:20:08.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6779.60 26.48 9481.00 1547.62 54606.70 00:20:08.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7508.50 29.33 8549.38 1666.20 54017.36 00:20:08.601 ======================================================== 00:20:08.601 Total : 26905.40 105.10 9538.79 1547.62 55461.07 00:20:08.601 00:20:08.601 10:33:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:08.601 10:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:08.602 10:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:08.602 10:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:08.602 10:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:08.602 10:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:08.602 10:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:08.602 rmmod nvme_tcp 00:20:08.858 rmmod nvme_fabrics 00:20:08.858 rmmod nvme_keyring 00:20:08.858 10:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:08.858 10:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:08.858 10:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:08.858 10:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2349936 ']' 00:20:08.858 10:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2349936 00:20:08.858 10:33:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2349936 ']' 00:20:08.858 10:33:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2349936 00:20:08.858 10:33:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:08.858 10:33:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:08.858 10:33:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2349936 00:20:08.858 10:33:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:08.858 10:33:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:08.858 10:33:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2349936' 00:20:08.858 killing process with pid 2349936 00:20:08.858 10:33:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2349936 00:20:08.858 10:33:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2349936 00:20:09.167 10:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:09.167 10:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:09.167 10:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:09.167 10:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:09.167 10:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:09.167 10:33:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.168 10:33:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.168 10:33:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.459 10:33:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:12.459 10:33:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:12.459 00:20:12.459 real 0m45.566s 00:20:12.459 user 2m41.652s 00:20:12.459 sys 0m10.082s 00:20:12.459 10:33:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:12.459 10:33:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.459 ************************************ 00:20:12.459 END TEST nvmf_perf_adq 00:20:12.459 ************************************ 00:20:12.459 10:33:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:12.459 10:33:06 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:12.459 10:33:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:12.459 10:33:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.459 10:33:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:12.459 ************************************ 00:20:12.459 START TEST nvmf_shutdown 00:20:12.459 ************************************ 00:20:12.459 10:33:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:12.459 * Looking for test storage... 
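[editor note] The perf_adq pass that just finished drove four initiator cores against the ADQ-steered target and then used nvmf_get_stats to prove the steering: with placement IDs enabled, all four I/O qpairs concentrate onto two poll groups (completed_nvme_io is nonzero only there), leaving the other two idle, and the test requires at least two idle groups. Both halves of that check as a sketch, with the perf binary path abbreviated and the target address as above:

# initiator: 64-deep 4 KiB random reads for 10 s on cores 4-7 (mask 0xF0)
spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
# verification: count poll groups with no active I/O qpairs; ADQ should leave 2
rpc.py nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l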
00:20:12.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:12.459 10:33:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.459 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:12.459 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.459 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.459 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.459 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.459 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.459 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.459 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.459 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.459 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.459 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.459 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.459 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:12.460 ************************************ 00:20:12.460 START TEST nvmf_shutdown_tc1 00:20:12.460 ************************************ 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:20:12.460 10:33:06 
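[editor note] Earlier in this block common.sh generated a fresh host identity with nvme gen-hostnqn and derived the bare host ID from it. One way to reproduce that pair; the exact parameter expansion common.sh uses is not visible in the trace, so this sketch assumes the UUID is the final colon-separated field of the NQN:

NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' to keep the UUID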
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:12.460 10:33:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:14.362 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:14.362 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.362 10:33:08 
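[editor note] The arrays above are the harness's table of supported NIC device IDs per family (e810, x722, Mellanox), keyed by vendor 0x8086 or 0x15b3; this run matched both ports of an E810 part (device 0x159b, driven by ice). To list the same functions by hand:

# list PCI functions by vendor:device, matching the "0x8086 - 0x159b" hits above
lspci -d 8086:159b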
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:14.362 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:14.362 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
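[editor note] Mapping a matched PCI function to its kernel net device is done with a sysfs glob, exactly as the pci_net_devs assignment above shows. A standalone sketch using the first address from this run:

# enumerate the net devices bound to one PCI function
pci=0000:0a:00.0
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
    echo "Found net device under $pci: ${dev##*/}"
done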
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:14.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:20:14.362 00:20:14.362 --- 10.0.0.2 ping statistics --- 00:20:14.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.362 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:14.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:20:14.362 00:20:14.362 --- 10.0.0.1 ping statistics --- 00:20:14.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.362 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.362 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2353974 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2353974 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2353974 ']' 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.363 10:33:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:14.363 [2024-07-15 10:33:08.914482] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
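[editor note] This target instance starts with -m 0x1E, a hex CPU mask selecting cores 1 through 4 (binary 11110) and leaving core 0 free; the EAL line below echoes it as -c 0x1E. A quick sketch of how such a mask is built:

# set bits 1..4 -> 0x1E
mask=0
for c in 1 2 3 4; do mask=$((mask | 1 << c)); done
printf '0x%X\n' "$mask"   # prints 0x1E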
00:20:14.363 [2024-07-15 10:33:08.914561] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.363 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.363 [2024-07-15 10:33:08.980323] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:14.621 [2024-07-15 10:33:09.091579] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.621 [2024-07-15 10:33:09.091628] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.621 [2024-07-15 10:33:09.091642] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.621 [2024-07-15 10:33:09.091653] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.621 [2024-07-15 10:33:09.091662] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.621 [2024-07-15 10:33:09.092181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.621 [2024-07-15 10:33:09.092304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.621 [2024-07-15 10:33:09.092468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:14.621 [2024-07-15 10:33:09.092473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:14.621 [2024-07-15 10:33:09.231554] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:14.621 10:33:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.621 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:14.887 Malloc1 00:20:14.887 [2024-07-15 10:33:09.307246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.887 Malloc2 00:20:14.887 Malloc3 00:20:14.887 Malloc4 00:20:14.887 Malloc5 00:20:14.887 Malloc6 00:20:15.238 Malloc7 00:20:15.238 Malloc8 00:20:15.238 Malloc9 00:20:15.238 Malloc10 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2354179 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2354179 
/var/tmp/bdevperf.sock 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2354179 ']' 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:15.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:15.238 { 00:20:15.238 "params": { 00:20:15.238 "name": "Nvme$subsystem", 00:20:15.238 "trtype": "$TEST_TRANSPORT", 00:20:15.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.238 "adrfam": "ipv4", 00:20:15.238 "trsvcid": "$NVMF_PORT", 00:20:15.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.238 "hdgst": ${hdgst:-false}, 00:20:15.238 "ddgst": ${ddgst:-false} 00:20:15.238 }, 00:20:15.238 "method": "bdev_nvme_attach_controller" 00:20:15.238 } 00:20:15.238 EOF 00:20:15.238 )") 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:15.238 { 00:20:15.238 "params": { 00:20:15.238 "name": "Nvme$subsystem", 00:20:15.238 "trtype": "$TEST_TRANSPORT", 00:20:15.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.238 "adrfam": "ipv4", 00:20:15.238 "trsvcid": "$NVMF_PORT", 00:20:15.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.238 "hdgst": ${hdgst:-false}, 00:20:15.238 "ddgst": ${ddgst:-false} 00:20:15.238 }, 00:20:15.238 "method": "bdev_nvme_attach_controller" 00:20:15.238 } 00:20:15.238 EOF 00:20:15.238 )") 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:15.238 { 00:20:15.238 "params": { 00:20:15.238 
"name": "Nvme$subsystem", 00:20:15.238 "trtype": "$TEST_TRANSPORT", 00:20:15.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.238 "adrfam": "ipv4", 00:20:15.238 "trsvcid": "$NVMF_PORT", 00:20:15.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.238 "hdgst": ${hdgst:-false}, 00:20:15.238 "ddgst": ${ddgst:-false} 00:20:15.238 }, 00:20:15.238 "method": "bdev_nvme_attach_controller" 00:20:15.238 } 00:20:15.238 EOF 00:20:15.238 )") 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:15.238 { 00:20:15.238 "params": { 00:20:15.238 "name": "Nvme$subsystem", 00:20:15.238 "trtype": "$TEST_TRANSPORT", 00:20:15.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.238 "adrfam": "ipv4", 00:20:15.238 "trsvcid": "$NVMF_PORT", 00:20:15.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.238 "hdgst": ${hdgst:-false}, 00:20:15.238 "ddgst": ${ddgst:-false} 00:20:15.238 }, 00:20:15.238 "method": "bdev_nvme_attach_controller" 00:20:15.238 } 00:20:15.238 EOF 00:20:15.238 )") 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:15.238 { 00:20:15.238 "params": { 00:20:15.238 "name": "Nvme$subsystem", 00:20:15.238 "trtype": "$TEST_TRANSPORT", 00:20:15.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.238 "adrfam": "ipv4", 00:20:15.238 "trsvcid": "$NVMF_PORT", 00:20:15.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.238 "hdgst": ${hdgst:-false}, 00:20:15.238 "ddgst": ${ddgst:-false} 00:20:15.238 }, 00:20:15.238 "method": "bdev_nvme_attach_controller" 00:20:15.238 } 00:20:15.238 EOF 00:20:15.238 )") 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:15.238 { 00:20:15.238 "params": { 00:20:15.238 "name": "Nvme$subsystem", 00:20:15.238 "trtype": "$TEST_TRANSPORT", 00:20:15.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.238 "adrfam": "ipv4", 00:20:15.238 "trsvcid": "$NVMF_PORT", 00:20:15.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.238 "hdgst": ${hdgst:-false}, 00:20:15.238 "ddgst": ${ddgst:-false} 00:20:15.238 }, 00:20:15.238 "method": "bdev_nvme_attach_controller" 00:20:15.238 } 00:20:15.238 EOF 00:20:15.238 )") 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:15.238 { 00:20:15.238 "params": { 00:20:15.238 "name": "Nvme$subsystem", 
00:20:15.238 "trtype": "$TEST_TRANSPORT", 00:20:15.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.238 "adrfam": "ipv4", 00:20:15.238 "trsvcid": "$NVMF_PORT", 00:20:15.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.238 "hdgst": ${hdgst:-false}, 00:20:15.238 "ddgst": ${ddgst:-false} 00:20:15.238 }, 00:20:15.238 "method": "bdev_nvme_attach_controller" 00:20:15.238 } 00:20:15.238 EOF 00:20:15.238 )") 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:15.238 { 00:20:15.238 "params": { 00:20:15.238 "name": "Nvme$subsystem", 00:20:15.238 "trtype": "$TEST_TRANSPORT", 00:20:15.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.238 "adrfam": "ipv4", 00:20:15.238 "trsvcid": "$NVMF_PORT", 00:20:15.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.238 "hdgst": ${hdgst:-false}, 00:20:15.238 "ddgst": ${ddgst:-false} 00:20:15.238 }, 00:20:15.238 "method": "bdev_nvme_attach_controller" 00:20:15.238 } 00:20:15.238 EOF 00:20:15.238 )") 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:15.238 { 00:20:15.238 "params": { 00:20:15.238 "name": "Nvme$subsystem", 00:20:15.238 "trtype": "$TEST_TRANSPORT", 00:20:15.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.238 "adrfam": "ipv4", 00:20:15.238 "trsvcid": "$NVMF_PORT", 00:20:15.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.238 "hdgst": ${hdgst:-false}, 00:20:15.238 "ddgst": ${ddgst:-false} 00:20:15.238 }, 00:20:15.238 "method": "bdev_nvme_attach_controller" 00:20:15.238 } 00:20:15.238 EOF 00:20:15.238 )") 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:15.238 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:15.238 { 00:20:15.239 "params": { 00:20:15.239 "name": "Nvme$subsystem", 00:20:15.239 "trtype": "$TEST_TRANSPORT", 00:20:15.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.239 "adrfam": "ipv4", 00:20:15.239 "trsvcid": "$NVMF_PORT", 00:20:15.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.239 "hdgst": ${hdgst:-false}, 00:20:15.239 "ddgst": ${ddgst:-false} 00:20:15.239 }, 00:20:15.239 "method": "bdev_nvme_attach_controller" 00:20:15.239 } 00:20:15.239 EOF 00:20:15.239 )") 00:20:15.239 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:15.239 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:20:15.239 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:15.239 10:33:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:15.239 "params": { 00:20:15.239 "name": "Nvme1", 00:20:15.239 "trtype": "tcp", 00:20:15.239 "traddr": "10.0.0.2", 00:20:15.239 "adrfam": "ipv4", 00:20:15.239 "trsvcid": "4420", 00:20:15.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.239 "hdgst": false, 00:20:15.239 "ddgst": false 00:20:15.239 }, 00:20:15.239 "method": "bdev_nvme_attach_controller" 00:20:15.239 },{ 00:20:15.239 "params": { 00:20:15.239 "name": "Nvme2", 00:20:15.239 "trtype": "tcp", 00:20:15.239 "traddr": "10.0.0.2", 00:20:15.239 "adrfam": "ipv4", 00:20:15.239 "trsvcid": "4420", 00:20:15.239 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:15.239 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:15.239 "hdgst": false, 00:20:15.239 "ddgst": false 00:20:15.239 }, 00:20:15.239 "method": "bdev_nvme_attach_controller" 00:20:15.239 },{ 00:20:15.239 "params": { 00:20:15.239 "name": "Nvme3", 00:20:15.239 "trtype": "tcp", 00:20:15.239 "traddr": "10.0.0.2", 00:20:15.239 "adrfam": "ipv4", 00:20:15.239 "trsvcid": "4420", 00:20:15.239 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:15.239 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:15.239 "hdgst": false, 00:20:15.239 "ddgst": false 00:20:15.239 }, 00:20:15.239 "method": "bdev_nvme_attach_controller" 00:20:15.239 },{ 00:20:15.239 "params": { 00:20:15.239 "name": "Nvme4", 00:20:15.239 "trtype": "tcp", 00:20:15.239 "traddr": "10.0.0.2", 00:20:15.239 "adrfam": "ipv4", 00:20:15.239 "trsvcid": "4420", 00:20:15.239 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:15.239 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:15.239 "hdgst": false, 00:20:15.239 "ddgst": false 00:20:15.239 }, 00:20:15.239 "method": "bdev_nvme_attach_controller" 00:20:15.239 },{ 00:20:15.239 "params": { 00:20:15.239 "name": "Nvme5", 00:20:15.239 "trtype": "tcp", 00:20:15.239 "traddr": "10.0.0.2", 00:20:15.239 "adrfam": "ipv4", 00:20:15.239 "trsvcid": "4420", 00:20:15.239 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:15.239 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:15.239 "hdgst": false, 00:20:15.239 "ddgst": false 00:20:15.239 }, 00:20:15.239 "method": "bdev_nvme_attach_controller" 00:20:15.239 },{ 00:20:15.239 "params": { 00:20:15.239 "name": "Nvme6", 00:20:15.239 "trtype": "tcp", 00:20:15.239 "traddr": "10.0.0.2", 00:20:15.239 "adrfam": "ipv4", 00:20:15.239 "trsvcid": "4420", 00:20:15.239 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:15.239 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:15.239 "hdgst": false, 00:20:15.239 "ddgst": false 00:20:15.239 }, 00:20:15.239 "method": "bdev_nvme_attach_controller" 00:20:15.239 },{ 00:20:15.239 "params": { 00:20:15.239 "name": "Nvme7", 00:20:15.239 "trtype": "tcp", 00:20:15.239 "traddr": "10.0.0.2", 00:20:15.239 "adrfam": "ipv4", 00:20:15.239 "trsvcid": "4420", 00:20:15.239 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:15.239 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:15.239 "hdgst": false, 00:20:15.239 "ddgst": false 00:20:15.239 }, 00:20:15.239 "method": "bdev_nvme_attach_controller" 00:20:15.239 },{ 00:20:15.239 "params": { 00:20:15.239 "name": "Nvme8", 00:20:15.239 "trtype": "tcp", 00:20:15.239 "traddr": "10.0.0.2", 00:20:15.239 "adrfam": "ipv4", 00:20:15.239 "trsvcid": "4420", 00:20:15.239 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:15.239 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:15.239 "hdgst": false, 
00:20:15.239 "ddgst": false 00:20:15.239 }, 00:20:15.239 "method": "bdev_nvme_attach_controller" 00:20:15.239 },{ 00:20:15.239 "params": { 00:20:15.239 "name": "Nvme9", 00:20:15.239 "trtype": "tcp", 00:20:15.239 "traddr": "10.0.0.2", 00:20:15.239 "adrfam": "ipv4", 00:20:15.239 "trsvcid": "4420", 00:20:15.239 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:15.239 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:15.239 "hdgst": false, 00:20:15.239 "ddgst": false 00:20:15.239 }, 00:20:15.239 "method": "bdev_nvme_attach_controller" 00:20:15.239 },{ 00:20:15.239 "params": { 00:20:15.239 "name": "Nvme10", 00:20:15.239 "trtype": "tcp", 00:20:15.239 "traddr": "10.0.0.2", 00:20:15.239 "adrfam": "ipv4", 00:20:15.239 "trsvcid": "4420", 00:20:15.239 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:15.239 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:15.239 "hdgst": false, 00:20:15.239 "ddgst": false 00:20:15.239 }, 00:20:15.239 "method": "bdev_nvme_attach_controller" 00:20:15.239 }' 00:20:15.239 [2024-07-15 10:33:09.825525] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:15.239 [2024-07-15 10:33:09.825598] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:15.239 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.497 [2024-07-15 10:33:09.889904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.497 [2024-07-15 10:33:09.999376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.394 10:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:17.394 10:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:17.394 10:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:17.394 10:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.394 10:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:17.394 10:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.394 10:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2354179 00:20:17.394 10:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:17.394 10:33:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:18.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2354179 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2353974 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:18.329 10:33:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.329 { 00:20:18.329 "params": { 00:20:18.329 "name": "Nvme$subsystem", 00:20:18.329 "trtype": "$TEST_TRANSPORT", 00:20:18.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.329 "adrfam": "ipv4", 00:20:18.329 "trsvcid": "$NVMF_PORT", 00:20:18.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.329 "hdgst": ${hdgst:-false}, 00:20:18.329 "ddgst": ${ddgst:-false} 00:20:18.329 }, 00:20:18.329 "method": "bdev_nvme_attach_controller" 00:20:18.329 } 00:20:18.329 EOF 00:20:18.329 )") 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.329 { 00:20:18.329 "params": { 00:20:18.329 "name": "Nvme$subsystem", 00:20:18.329 "trtype": "$TEST_TRANSPORT", 00:20:18.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.329 "adrfam": "ipv4", 00:20:18.329 "trsvcid": "$NVMF_PORT", 00:20:18.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.329 "hdgst": ${hdgst:-false}, 00:20:18.329 "ddgst": ${ddgst:-false} 00:20:18.329 }, 00:20:18.329 "method": "bdev_nvme_attach_controller" 00:20:18.329 } 00:20:18.329 EOF 00:20:18.329 )") 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.329 { 00:20:18.329 "params": { 00:20:18.329 "name": "Nvme$subsystem", 00:20:18.329 "trtype": "$TEST_TRANSPORT", 00:20:18.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.329 "adrfam": "ipv4", 00:20:18.329 "trsvcid": "$NVMF_PORT", 00:20:18.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.329 "hdgst": ${hdgst:-false}, 00:20:18.329 "ddgst": ${ddgst:-false} 00:20:18.329 }, 00:20:18.329 "method": "bdev_nvme_attach_controller" 00:20:18.329 } 00:20:18.329 EOF 00:20:18.329 )") 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.329 { 00:20:18.329 "params": { 00:20:18.329 "name": "Nvme$subsystem", 00:20:18.329 "trtype": "$TEST_TRANSPORT", 00:20:18.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.329 "adrfam": "ipv4", 00:20:18.329 "trsvcid": "$NVMF_PORT", 00:20:18.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.329 "hdgst": ${hdgst:-false}, 00:20:18.329 "ddgst": ${ddgst:-false} 00:20:18.329 }, 00:20:18.329 "method": "bdev_nvme_attach_controller" 00:20:18.329 } 00:20:18.329 EOF 00:20:18.329 )") 00:20:18.329 10:33:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.329 { 00:20:18.329 "params": { 00:20:18.329 "name": "Nvme$subsystem", 00:20:18.329 "trtype": "$TEST_TRANSPORT", 00:20:18.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.329 "adrfam": "ipv4", 00:20:18.329 "trsvcid": "$NVMF_PORT", 00:20:18.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.329 "hdgst": ${hdgst:-false}, 00:20:18.329 "ddgst": ${ddgst:-false} 00:20:18.329 }, 00:20:18.329 "method": "bdev_nvme_attach_controller" 00:20:18.329 } 00:20:18.329 EOF 00:20:18.329 )") 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.329 { 00:20:18.329 "params": { 00:20:18.329 "name": "Nvme$subsystem", 00:20:18.329 "trtype": "$TEST_TRANSPORT", 00:20:18.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.329 "adrfam": "ipv4", 00:20:18.329 "trsvcid": "$NVMF_PORT", 00:20:18.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.329 "hdgst": ${hdgst:-false}, 00:20:18.329 "ddgst": ${ddgst:-false} 00:20:18.329 }, 00:20:18.329 "method": "bdev_nvme_attach_controller" 00:20:18.329 } 00:20:18.329 EOF 00:20:18.329 )") 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.329 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.329 { 00:20:18.329 "params": { 00:20:18.329 "name": "Nvme$subsystem", 00:20:18.329 "trtype": "$TEST_TRANSPORT", 00:20:18.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.329 "adrfam": "ipv4", 00:20:18.329 "trsvcid": "$NVMF_PORT", 00:20:18.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.329 "hdgst": ${hdgst:-false}, 00:20:18.329 "ddgst": ${ddgst:-false} 00:20:18.329 }, 00:20:18.329 "method": "bdev_nvme_attach_controller" 00:20:18.329 } 00:20:18.329 EOF 00:20:18.329 )") 00:20:18.330 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:18.330 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.330 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.330 { 00:20:18.330 "params": { 00:20:18.330 "name": "Nvme$subsystem", 00:20:18.330 "trtype": "$TEST_TRANSPORT", 00:20:18.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.330 "adrfam": "ipv4", 00:20:18.330 "trsvcid": "$NVMF_PORT", 00:20:18.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.330 "hdgst": ${hdgst:-false}, 00:20:18.330 "ddgst": ${ddgst:-false} 00:20:18.330 }, 00:20:18.330 "method": "bdev_nvme_attach_controller" 00:20:18.330 } 00:20:18.330 EOF 00:20:18.330 )") 00:20:18.330 10:33:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:18.330 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.330 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.330 { 00:20:18.330 "params": { 00:20:18.330 "name": "Nvme$subsystem", 00:20:18.330 "trtype": "$TEST_TRANSPORT", 00:20:18.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.330 "adrfam": "ipv4", 00:20:18.330 "trsvcid": "$NVMF_PORT", 00:20:18.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.330 "hdgst": ${hdgst:-false}, 00:20:18.330 "ddgst": ${ddgst:-false} 00:20:18.330 }, 00:20:18.330 "method": "bdev_nvme_attach_controller" 00:20:18.330 } 00:20:18.330 EOF 00:20:18.330 )") 00:20:18.330 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:18.330 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.330 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.330 { 00:20:18.330 "params": { 00:20:18.330 "name": "Nvme$subsystem", 00:20:18.330 "trtype": "$TEST_TRANSPORT", 00:20:18.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.330 "adrfam": "ipv4", 00:20:18.330 "trsvcid": "$NVMF_PORT", 00:20:18.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.330 "hdgst": ${hdgst:-false}, 00:20:18.330 "ddgst": ${ddgst:-false} 00:20:18.330 }, 00:20:18.330 "method": "bdev_nvme_attach_controller" 00:20:18.330 } 00:20:18.330 EOF 00:20:18.330 )") 00:20:18.330 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:18.330 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
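The comma-joined stanzas are validated with jq and handed to bdevperf over an anonymous pipe, which is why the trace shows --json /dev/fd/62 and /dev/fd/63. A condensed view of the invocation for this tc1 run (the flags mirror the commands visible in this log; $rootdir stands in for the workspace checkout):

# gen_nvmf_target_json is the helper traced above; <(...) is bash process
# substitution, so bdevperf reads the generated JSON from /dev/fd/NN.
"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1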
00:20:18.330 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:18.330 10:33:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:18.330 "params": { 00:20:18.330 "name": "Nvme1", 00:20:18.330 "trtype": "tcp", 00:20:18.330 "traddr": "10.0.0.2", 00:20:18.330 "adrfam": "ipv4", 00:20:18.330 "trsvcid": "4420", 00:20:18.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.330 "hdgst": false, 00:20:18.330 "ddgst": false 00:20:18.330 }, 00:20:18.330 "method": "bdev_nvme_attach_controller" 00:20:18.330 },{ 00:20:18.330 "params": { 00:20:18.330 "name": "Nvme2", 00:20:18.330 "trtype": "tcp", 00:20:18.330 "traddr": "10.0.0.2", 00:20:18.330 "adrfam": "ipv4", 00:20:18.330 "trsvcid": "4420", 00:20:18.330 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:18.330 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:18.330 "hdgst": false, 00:20:18.330 "ddgst": false 00:20:18.330 }, 00:20:18.330 "method": "bdev_nvme_attach_controller" 00:20:18.330 },{ 00:20:18.330 "params": { 00:20:18.330 "name": "Nvme3", 00:20:18.330 "trtype": "tcp", 00:20:18.330 "traddr": "10.0.0.2", 00:20:18.330 "adrfam": "ipv4", 00:20:18.330 "trsvcid": "4420", 00:20:18.330 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:18.330 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:18.330 "hdgst": false, 00:20:18.330 "ddgst": false 00:20:18.330 }, 00:20:18.330 "method": "bdev_nvme_attach_controller" 00:20:18.330 },{ 00:20:18.330 "params": { 00:20:18.330 "name": "Nvme4", 00:20:18.330 "trtype": "tcp", 00:20:18.330 "traddr": "10.0.0.2", 00:20:18.330 "adrfam": "ipv4", 00:20:18.330 "trsvcid": "4420", 00:20:18.330 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:18.330 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:18.330 "hdgst": false, 00:20:18.330 "ddgst": false 00:20:18.330 }, 00:20:18.330 "method": "bdev_nvme_attach_controller" 00:20:18.330 },{ 00:20:18.330 "params": { 00:20:18.330 "name": "Nvme5", 00:20:18.330 "trtype": "tcp", 00:20:18.330 "traddr": "10.0.0.2", 00:20:18.330 "adrfam": "ipv4", 00:20:18.330 "trsvcid": "4420", 00:20:18.330 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:18.330 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:18.330 "hdgst": false, 00:20:18.330 "ddgst": false 00:20:18.330 }, 00:20:18.330 "method": "bdev_nvme_attach_controller" 00:20:18.330 },{ 00:20:18.330 "params": { 00:20:18.330 "name": "Nvme6", 00:20:18.330 "trtype": "tcp", 00:20:18.330 "traddr": "10.0.0.2", 00:20:18.330 "adrfam": "ipv4", 00:20:18.330 "trsvcid": "4420", 00:20:18.330 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:18.330 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:18.330 "hdgst": false, 00:20:18.330 "ddgst": false 00:20:18.330 }, 00:20:18.330 "method": "bdev_nvme_attach_controller" 00:20:18.330 },{ 00:20:18.330 "params": { 00:20:18.330 "name": "Nvme7", 00:20:18.330 "trtype": "tcp", 00:20:18.330 "traddr": "10.0.0.2", 00:20:18.330 "adrfam": "ipv4", 00:20:18.330 "trsvcid": "4420", 00:20:18.330 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:18.330 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:18.330 "hdgst": false, 00:20:18.330 "ddgst": false 00:20:18.330 }, 00:20:18.330 "method": "bdev_nvme_attach_controller" 00:20:18.330 },{ 00:20:18.330 "params": { 00:20:18.330 "name": "Nvme8", 00:20:18.330 "trtype": "tcp", 00:20:18.330 "traddr": "10.0.0.2", 00:20:18.330 "adrfam": "ipv4", 00:20:18.330 "trsvcid": "4420", 00:20:18.330 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:18.330 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:18.330 "hdgst": false, 
00:20:18.330 "ddgst": false
00:20:18.330 },
00:20:18.330 "method": "bdev_nvme_attach_controller"
00:20:18.330 },{
00:20:18.330 "params": {
00:20:18.330 "name": "Nvme9",
00:20:18.330 "trtype": "tcp",
00:20:18.330 "traddr": "10.0.0.2",
00:20:18.330 "adrfam": "ipv4",
00:20:18.330 "trsvcid": "4420",
00:20:18.330 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:20:18.330 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:20:18.330 "hdgst": false,
00:20:18.330 "ddgst": false
00:20:18.330 },
00:20:18.330 "method": "bdev_nvme_attach_controller"
00:20:18.330 },{
00:20:18.330 "params": {
00:20:18.330 "name": "Nvme10",
00:20:18.330 "trtype": "tcp",
00:20:18.330 "traddr": "10.0.0.2",
00:20:18.330 "adrfam": "ipv4",
00:20:18.330 "trsvcid": "4420",
00:20:18.330 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:20:18.330 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:20:18.330 "hdgst": false,
00:20:18.330 "ddgst": false
00:20:18.330 },
00:20:18.330 "method": "bdev_nvme_attach_controller"
00:20:18.330 }'
00:20:18.330 [2024-07-15 10:33:12.881695] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:20:18.330 [2024-07-15 10:33:12.881784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2354488 ]
00:20:18.330 EAL: No free 2048 kB hugepages reported on node 1
00:20:18.330 [2024-07-15 10:33:12.948967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:18.589 [2024-07-15 10:33:13.062236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:20:19.958 Running I/O for 1 seconds...
00:20:21.330
00:20:21.330 Latency(us)
00:20:21.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:21.330 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.330 Verification LBA range: start 0x0 length 0x400
00:20:21.330 Nvme1n1 : 1.13 226.82 14.18 0.00 0.00 279419.07 18738.44 257872.02
00:20:21.330 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.330 Verification LBA range: start 0x0 length 0x400
00:20:21.330 Nvme2n1 : 1.14 223.83 13.99 0.00 0.00 277057.42 19223.89 260978.92
00:20:21.330 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.330 Verification LBA range: start 0x0 length 0x400
00:20:21.330 Nvme3n1 : 1.12 232.15 14.51 0.00 0.00 262471.77 5291.43 259425.47
00:20:21.330 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.330 Verification LBA range: start 0x0 length 0x400
00:20:21.330 Nvme4n1 : 1.12 231.80 14.49 0.00 0.00 258379.32 6796.33 254765.13
00:20:21.330 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.330 Verification LBA range: start 0x0 length 0x400
00:20:21.330 Nvme5n1 : 1.15 222.63 13.91 0.00 0.00 266488.04 18155.90 264085.81
00:20:21.330 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.330 Verification LBA range: start 0x0 length 0x400
00:20:21.330 Nvme6n1 : 1.18 271.29 16.96 0.00 0.00 215387.70 20194.80 254765.13
00:20:21.330 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.330 Verification LBA range: start 0x0 length 0x400
00:20:21.330 Nvme7n1 : 1.13 226.28 14.14 0.00 0.00 252966.68 33787.45 242337.56
00:20:21.330 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.330 Verification LBA range: start 0x0 length 0x400
00:20:21.330 Nvme8n1 : 1.14 224.39 14.02 0.00 0.00 250927.41 23592.96 253211.69
00:20:21.330 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.330 Verification LBA range: start 0x0 length 0x400
00:20:21.330 Nvme9n1 : 1.17 218.20 13.64 0.00 0.00 254397.82 21748.24 268746.15
00:20:21.330 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.330 Verification LBA range: start 0x0 length 0x400
00:20:21.330 Nvme10n1 : 1.19 268.17 16.76 0.00 0.00 203852.99 6893.42 288940.94
00:20:21.330 ===================================================================================================================
00:20:21.330 Total : 2345.56 146.60 0.00 0.00 250140.93 5291.43 288940.94
00:20:21.587 10:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:20:21.588 10:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:20:21.588 10:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:21.588 10:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:21.588 10:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:20:21.588 10:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:21.588 10:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:20:21.588 10:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:21.588 10:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:20:21.588 10:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:21.588 10:33:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:21.588 rmmod nvme_tcp
00:20:21.588 rmmod nvme_fabrics
00:20:21.588 rmmod nvme_keyring
00:20:21.588 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:21.588 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e
00:20:21.588 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0
00:20:21.588 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2353974 ']'
00:20:21.588 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2353974
00:20:21.588 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2353974 ']'
00:20:21.588 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2353974
00:20:21.588 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname
00:20:21.588 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:21.588 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2353974
00:20:21.588 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:20:21.588 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
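Before the teardown trace continues, a quick cross-check on the summary table above: bdevperf ran with -o 65536, so every I/O is 64 KiB and MiB/s should equal IOPS / 16, which holds per row and in total:

# e.g. the Nvme1n1 row and the Total row from the table above:
awk 'BEGIN { printf "%.2f %.2f\n", 226.82 / 16, 2345.56 / 16 }'
# prints: 14.18 146.60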
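The guard checks just traced come from common/autotest_common.sh's killprocess. A simplified sketch of that logic, not the verbatim source: only signal a pid that is still alive, and never one whose comm name is sudo, so the privileged wrapper shell is left untouched:

killprocess_sketch() {
    local pid=$1 process_name
    kill -0 "$pid" 2> /dev/null || return 0   # already gone, nothing to do
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [[ $process_name == sudo ]]; then
        return 1                              # refuse to kill the sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}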
00:20:21.588 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2353974' 00:20:21.588 killing process with pid 2353974 00:20:21.588 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2353974 00:20:21.588 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2353974 00:20:22.155 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:22.155 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:22.155 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:22.155 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.155 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:22.155 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.155 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.155 10:33:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.058 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:24.058 00:20:24.058 real 0m11.882s 00:20:24.058 user 0m34.752s 00:20:24.058 sys 0m3.142s 00:20:24.058 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:24.058 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:24.058 ************************************ 00:20:24.058 END TEST nvmf_shutdown_tc1 00:20:24.058 ************************************ 00:20:24.058 10:33:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:24.058 10:33:18 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:24.058 10:33:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:24.058 10:33:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:24.058 10:33:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:24.317 ************************************ 00:20:24.317 START TEST nvmf_shutdown_tc2 00:20:24.317 ************************************ 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.317 10:33:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:24.317 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.317 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:24.318 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:24.318 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:24.318 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:24.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:20:24.318 00:20:24.318 --- 10.0.0.2 ping statistics --- 00:20:24.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.318 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:24.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:24.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:20:24.318 00:20:24.318 --- 10.0.0.1 ping statistics --- 00:20:24.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.318 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=2355368 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2355368 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2355368 ']' 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.318 10:33:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:24.318 [2024-07-15 10:33:18.947714] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:24.318 [2024-07-15 10:33:18.947784] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.576 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.576 [2024-07-15 10:33:19.016280] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:24.576 [2024-07-15 10:33:19.133942] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.576 [2024-07-15 10:33:19.133997] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.576 [2024-07-15 10:33:19.134026] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.576 [2024-07-15 10:33:19.134038] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.576 [2024-07-15 10:33:19.134048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
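Before the target app above was started, nvmf_tcp_init stitched the two E810 ports into a loopback topology: cvl_0_0 moves into a private namespace as the target side, cvl_0_1 stays in the root namespace as the initiator, and reachability is proven both ways. A condensed replay of exactly the commands traced a few lines up:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port -> namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator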
00:20:24.576 [2024-07-15 10:33:19.134098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.576 [2024-07-15 10:33:19.134157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.576 [2024-07-15 10:33:19.134235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:24.576 [2024-07-15 10:33:19.134237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:24.835 [2024-07-15 10:33:19.272634] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:24.835 10:33:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.835 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:24.835 Malloc1 00:20:24.835 [2024-07-15 10:33:19.347654] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.835 Malloc2 00:20:24.835 Malloc3 00:20:24.835 Malloc4 00:20:25.093 Malloc5 00:20:25.093 Malloc6 00:20:25.093 Malloc7 00:20:25.093 Malloc8 00:20:25.093 Malloc9 00:20:25.352 Malloc10 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2355546 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2355546 /var/tmp/bdevperf.sock 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2355546 ']' 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
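The rpcs.txt batch generated by the shutdown.sh@27/@28 loop above is replayed through a single rpc_cmd call, which is what produced the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener notice. A sketch of what each iteration plausibly appends, inferred from those notices ($malloc_size and $malloc_block_size are illustrative placeholders, not values visible in this trace):

for i in "${num_subsystems[@]}"; do
    # Four RPCs per subsystem: backing bdev, subsystem, namespace, listener.
    cat >> "$testdir/rpcs.txt" <<EOL
bdev_malloc_create $malloc_size $malloc_block_size -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t $TEST_TRANSPORT -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT
EOL
done
rpc_cmd < "$testdir/rpcs.txt"   # one RPC session creates all ten subsystems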
00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.352 { 00:20:25.352 "params": { 00:20:25.352 "name": "Nvme$subsystem", 00:20:25.352 "trtype": "$TEST_TRANSPORT", 00:20:25.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.352 "adrfam": "ipv4", 00:20:25.352 "trsvcid": "$NVMF_PORT", 00:20:25.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.352 "hdgst": ${hdgst:-false}, 00:20:25.352 "ddgst": ${ddgst:-false} 00:20:25.352 }, 00:20:25.352 "method": "bdev_nvme_attach_controller" 00:20:25.352 } 00:20:25.352 EOF 00:20:25.352 )") 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.352 { 00:20:25.352 "params": { 00:20:25.352 "name": "Nvme$subsystem", 00:20:25.352 "trtype": "$TEST_TRANSPORT", 00:20:25.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.352 "adrfam": "ipv4", 00:20:25.352 "trsvcid": "$NVMF_PORT", 00:20:25.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.352 "hdgst": ${hdgst:-false}, 00:20:25.352 "ddgst": ${ddgst:-false} 00:20:25.352 }, 00:20:25.352 "method": "bdev_nvme_attach_controller" 00:20:25.352 } 00:20:25.352 EOF 00:20:25.352 )") 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.352 { 00:20:25.352 "params": { 00:20:25.352 "name": "Nvme$subsystem", 00:20:25.352 "trtype": "$TEST_TRANSPORT", 00:20:25.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.352 "adrfam": "ipv4", 00:20:25.352 "trsvcid": "$NVMF_PORT", 00:20:25.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.352 "hdgst": ${hdgst:-false}, 00:20:25.352 "ddgst": ${ddgst:-false} 00:20:25.352 }, 00:20:25.352 "method": "bdev_nvme_attach_controller" 00:20:25.352 } 00:20:25.352 EOF 00:20:25.352 )") 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.352 { 00:20:25.352 "params": { 00:20:25.352 "name": "Nvme$subsystem", 00:20:25.352 "trtype": "$TEST_TRANSPORT", 00:20:25.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.352 "adrfam": "ipv4", 00:20:25.352 "trsvcid": "$NVMF_PORT", 
00:20:25.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.352 "hdgst": ${hdgst:-false}, 00:20:25.352 "ddgst": ${ddgst:-false} 00:20:25.352 }, 00:20:25.352 "method": "bdev_nvme_attach_controller" 00:20:25.352 } 00:20:25.352 EOF 00:20:25.352 )") 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.352 { 00:20:25.352 "params": { 00:20:25.352 "name": "Nvme$subsystem", 00:20:25.352 "trtype": "$TEST_TRANSPORT", 00:20:25.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.352 "adrfam": "ipv4", 00:20:25.352 "trsvcid": "$NVMF_PORT", 00:20:25.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.352 "hdgst": ${hdgst:-false}, 00:20:25.352 "ddgst": ${ddgst:-false} 00:20:25.352 }, 00:20:25.352 "method": "bdev_nvme_attach_controller" 00:20:25.352 } 00:20:25.352 EOF 00:20:25.352 )") 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.352 { 00:20:25.352 "params": { 00:20:25.352 "name": "Nvme$subsystem", 00:20:25.352 "trtype": "$TEST_TRANSPORT", 00:20:25.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.352 "adrfam": "ipv4", 00:20:25.352 "trsvcid": "$NVMF_PORT", 00:20:25.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.352 "hdgst": ${hdgst:-false}, 00:20:25.352 "ddgst": ${ddgst:-false} 00:20:25.352 }, 00:20:25.352 "method": "bdev_nvme_attach_controller" 00:20:25.352 } 00:20:25.352 EOF 00:20:25.352 )") 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.352 { 00:20:25.352 "params": { 00:20:25.352 "name": "Nvme$subsystem", 00:20:25.352 "trtype": "$TEST_TRANSPORT", 00:20:25.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.352 "adrfam": "ipv4", 00:20:25.352 "trsvcid": "$NVMF_PORT", 00:20:25.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.352 "hdgst": ${hdgst:-false}, 00:20:25.352 "ddgst": ${ddgst:-false} 00:20:25.352 }, 00:20:25.352 "method": "bdev_nvme_attach_controller" 00:20:25.352 } 00:20:25.352 EOF 00:20:25.352 )") 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.352 { 00:20:25.352 "params": { 00:20:25.352 "name": "Nvme$subsystem", 00:20:25.352 "trtype": "$TEST_TRANSPORT", 00:20:25.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.352 "adrfam": "ipv4", 00:20:25.352 "trsvcid": "$NVMF_PORT", 00:20:25.352 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.352 "hdgst": ${hdgst:-false}, 00:20:25.352 "ddgst": ${ddgst:-false} 00:20:25.352 }, 00:20:25.352 "method": "bdev_nvme_attach_controller" 00:20:25.352 } 00:20:25.352 EOF 00:20:25.352 )") 00:20:25.352 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:25.353 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.353 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.353 { 00:20:25.353 "params": { 00:20:25.353 "name": "Nvme$subsystem", 00:20:25.353 "trtype": "$TEST_TRANSPORT", 00:20:25.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.353 "adrfam": "ipv4", 00:20:25.353 "trsvcid": "$NVMF_PORT", 00:20:25.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.353 "hdgst": ${hdgst:-false}, 00:20:25.353 "ddgst": ${ddgst:-false} 00:20:25.353 }, 00:20:25.353 "method": "bdev_nvme_attach_controller" 00:20:25.353 } 00:20:25.353 EOF 00:20:25.353 )") 00:20:25.353 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:25.353 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.353 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.353 { 00:20:25.353 "params": { 00:20:25.353 "name": "Nvme$subsystem", 00:20:25.353 "trtype": "$TEST_TRANSPORT", 00:20:25.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.353 "adrfam": "ipv4", 00:20:25.353 "trsvcid": "$NVMF_PORT", 00:20:25.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.353 "hdgst": ${hdgst:-false}, 00:20:25.353 "ddgst": ${ddgst:-false} 00:20:25.353 }, 00:20:25.353 "method": "bdev_nvme_attach_controller" 00:20:25.353 } 00:20:25.353 EOF 00:20:25.353 )") 00:20:25.353 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:25.353 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:20:25.353 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:25.353 10:33:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:25.353 "params": { 00:20:25.353 "name": "Nvme1", 00:20:25.353 "trtype": "tcp", 00:20:25.353 "traddr": "10.0.0.2", 00:20:25.353 "adrfam": "ipv4", 00:20:25.353 "trsvcid": "4420", 00:20:25.353 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.353 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.353 "hdgst": false, 00:20:25.353 "ddgst": false 00:20:25.353 }, 00:20:25.353 "method": "bdev_nvme_attach_controller" 00:20:25.353 },{ 00:20:25.353 "params": { 00:20:25.353 "name": "Nvme2", 00:20:25.353 "trtype": "tcp", 00:20:25.353 "traddr": "10.0.0.2", 00:20:25.353 "adrfam": "ipv4", 00:20:25.353 "trsvcid": "4420", 00:20:25.353 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:25.353 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:25.353 "hdgst": false, 00:20:25.353 "ddgst": false 00:20:25.353 }, 00:20:25.353 "method": "bdev_nvme_attach_controller" 00:20:25.353 },{ 00:20:25.353 "params": { 00:20:25.353 "name": "Nvme3", 00:20:25.353 "trtype": "tcp", 00:20:25.353 "traddr": "10.0.0.2", 00:20:25.353 "adrfam": "ipv4", 00:20:25.353 "trsvcid": "4420", 00:20:25.353 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:25.353 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:25.353 "hdgst": false, 00:20:25.353 "ddgst": false 00:20:25.353 }, 00:20:25.353 "method": "bdev_nvme_attach_controller" 00:20:25.353 },{ 00:20:25.353 "params": { 00:20:25.353 "name": "Nvme4", 00:20:25.353 "trtype": "tcp", 00:20:25.353 "traddr": "10.0.0.2", 00:20:25.353 "adrfam": "ipv4", 00:20:25.353 "trsvcid": "4420", 00:20:25.353 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:25.353 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:25.353 "hdgst": false, 00:20:25.353 "ddgst": false 00:20:25.353 }, 00:20:25.353 "method": "bdev_nvme_attach_controller" 00:20:25.353 },{ 00:20:25.353 "params": { 00:20:25.353 "name": "Nvme5", 00:20:25.353 "trtype": "tcp", 00:20:25.353 "traddr": "10.0.0.2", 00:20:25.353 "adrfam": "ipv4", 00:20:25.353 "trsvcid": "4420", 00:20:25.353 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:25.353 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:25.353 "hdgst": false, 00:20:25.353 "ddgst": false 00:20:25.353 }, 00:20:25.353 "method": "bdev_nvme_attach_controller" 00:20:25.353 },{ 00:20:25.353 "params": { 00:20:25.353 "name": "Nvme6", 00:20:25.353 "trtype": "tcp", 00:20:25.353 "traddr": "10.0.0.2", 00:20:25.353 "adrfam": "ipv4", 00:20:25.353 "trsvcid": "4420", 00:20:25.353 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:25.353 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:25.353 "hdgst": false, 00:20:25.353 "ddgst": false 00:20:25.353 }, 00:20:25.353 "method": "bdev_nvme_attach_controller" 00:20:25.353 },{ 00:20:25.353 "params": { 00:20:25.353 "name": "Nvme7", 00:20:25.353 "trtype": "tcp", 00:20:25.353 "traddr": "10.0.0.2", 00:20:25.353 "adrfam": "ipv4", 00:20:25.353 "trsvcid": "4420", 00:20:25.353 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:25.353 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:25.353 "hdgst": false, 00:20:25.353 "ddgst": false 00:20:25.353 }, 00:20:25.353 "method": "bdev_nvme_attach_controller" 00:20:25.353 },{ 00:20:25.353 "params": { 00:20:25.353 "name": "Nvme8", 00:20:25.353 "trtype": "tcp", 00:20:25.353 "traddr": "10.0.0.2", 00:20:25.353 "adrfam": "ipv4", 00:20:25.353 "trsvcid": "4420", 00:20:25.353 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:25.353 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:25.353 "hdgst": false, 
00:20:25.353 "ddgst": false 00:20:25.353 }, 00:20:25.353 "method": "bdev_nvme_attach_controller" 00:20:25.353 },{ 00:20:25.353 "params": { 00:20:25.353 "name": "Nvme9", 00:20:25.353 "trtype": "tcp", 00:20:25.353 "traddr": "10.0.0.2", 00:20:25.353 "adrfam": "ipv4", 00:20:25.353 "trsvcid": "4420", 00:20:25.353 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:25.353 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:25.353 "hdgst": false, 00:20:25.353 "ddgst": false 00:20:25.353 }, 00:20:25.353 "method": "bdev_nvme_attach_controller" 00:20:25.353 },{ 00:20:25.353 "params": { 00:20:25.353 "name": "Nvme10", 00:20:25.353 "trtype": "tcp", 00:20:25.353 "traddr": "10.0.0.2", 00:20:25.353 "adrfam": "ipv4", 00:20:25.353 "trsvcid": "4420", 00:20:25.353 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:25.353 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:25.353 "hdgst": false, 00:20:25.353 "ddgst": false 00:20:25.353 }, 00:20:25.353 "method": "bdev_nvme_attach_controller" 00:20:25.353 }' 00:20:25.353 [2024-07-15 10:33:19.860091] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:25.353 [2024-07-15 10:33:19.860187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2355546 ] 00:20:25.353 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.353 [2024-07-15 10:33:19.923054] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.611 [2024-07-15 10:33:20.037288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.984 Running I/O for 10 seconds... 00:20:27.242 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:27.242 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:27.242 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:27.242 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.242 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:27.242 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.242 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:27.242 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:27.242 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:27.242 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:27.242 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:27.242 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:27.242 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:27.242 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:27.242 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:27.242 10:33:21 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.242 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:27.500 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.500 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=84 00:20:27.500 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 84 -ge 100 ']' 00:20:27.500 10:33:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2355546 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2355546 ']' 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2355546 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:27.758 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2355546 00:20:27.759 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:27.759 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:27.759 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2355546' 00:20:27.759 killing process with pid 2355546 00:20:27.759 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2355546 00:20:27.759 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2355546
00:20:27.759 Received shutdown signal, test time was about 0.908009 seconds
00:20:27.759
00:20:27.759                                                                                Latency(us)
00:20:27.759 Device Information                                                        : runtime(s)       IOPS      MiB/s     Fail/s     TO/s      Average        min        max
00:20:27.759 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.759      Verification LBA range: start 0x0 length 0x400
00:20:27.759      Nvme1n1                                                              :       0.90     285.59      17.85       0.00     0.00    220961.75   31845.64  220589.32
00:20:27.759 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.759      Verification LBA range: start 0x0 length 0x400
00:20:27.759      Nvme2n1                                                              :       0.89     214.90      13.43       0.00     0.00    288163.21   21748.24  257872.02
00:20:27.759 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.759      Verification LBA range: start 0x0 length 0x400
00:20:27.759      Nvme3n1                                                              :       0.87     244.86      15.30       0.00     0.00    241851.66   17185.00  254765.13
00:20:27.759 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.759      Verification LBA range: start 0x0 length 0x400
00:20:27.759      Nvme4n1                                                              :       0.89     293.38      18.34       0.00     0.00    200743.41    5655.51  236123.78
00:20:27.759 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.759      Verification LBA range: start 0x0 length 0x400
00:20:27.759      Nvme5n1                                                              :       0.88     218.25      13.64       0.00     0.00    265363.66   21651.15  256318.58
00:20:27.759 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.759      Verification LBA range: start 0x0 length 0x400
00:20:27.759      Nvme6n1                                                              :       0.90     213.19      13.32       0.00     0.00    266088.04   22816.24  271853.04
00:20:27.759 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.759      Verification LBA range: start 0x0 length 0x400
00:20:27.759      Nvme7n1                                                              :       0.86     222.86      13.93       0.00     0.00    247279.31   23204.60  250104.79
00:20:27.759 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.759      Verification LBA range: start 0x0 length 0x400
00:20:27.759      Nvme8n1                                                              :       0.87     221.70      13.86       0.00     0.00    242713.03   15922.82  257872.02
00:20:27.759 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.759      Verification LBA range: start 0x0 length 0x400
00:20:27.759      Nvme9n1                                                              :       0.89     216.18      13.51       0.00     0.00    244212.62   29515.47  236123.78
00:20:27.759 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:27.759      Verification LBA range: start 0x0 length 0x400
00:20:27.759      Nvme10n1                                                             :       0.91     209.43      13.09       0.00     0.00    246721.45   26020.22  299815.06
===================================================================================================================
00:20:27.759 Total                                                                     :                2340.35     146.27       0.00     0.00    244055.47    5655.51  299815.06
00:20:28.016 10:33:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:28.945 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2355368 00:20:28.945 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:28.945 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:28.945 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:28.945 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:28.945 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:28.945 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:28.945 10:33:23
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:28.945 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:28.945 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:28.945 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:28.945 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:29.202 rmmod nvme_tcp 00:20:29.202 rmmod nvme_fabrics 00:20:29.202 rmmod nvme_keyring 00:20:29.202 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:29.202 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:29.202 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:29.202 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2355368 ']' 00:20:29.202 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2355368 00:20:29.202 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2355368 ']' 00:20:29.202 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2355368 00:20:29.202 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:29.202 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:29.202 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2355368 00:20:29.202 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:29.202 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:29.202 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2355368' 00:20:29.202 killing process with pid 2355368 00:20:29.202 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2355368 00:20:29.202 10:33:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2355368 00:20:29.768 10:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:29.768 10:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:29.768 10:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:29.768 10:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:29.768 10:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:29.768 10:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.768 10:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.768 10:33:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.686 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:31.686 00:20:31.686 real 0m7.506s 00:20:31.686 user 0m22.216s 00:20:31.686 sys 0m1.435s 
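The pass/fail decision for tc2 is the waitforio loop traced above: starting from i=10, shutdown.sh@60 polls bdev_get_iostat over bdevperf's RPC socket and extracts num_read_ops with jq, @63 succeeds once the counter reaches 100 (here 84 on the first sample, 195 a quarter second later), and killprocess then confirms the pid is alive with kill -0 and checks its name with ps before killing and waiting. A minimal re-creation of the polling helper, assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py:

# Sketch of the waitforio pattern: poll read ops on a bdev until they move.
waitforio() {
	local sock=$1 bdev=$2 count
	local i=10
	while ((i != 0)); do
		count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
			jq -r '.bdevs[0].num_read_ops')
		[ "$count" -ge 100 ] && return 0
		sleep 0.25
		((i--))
	done
	return 1   # no I/O observed in ~2.5 s; the shutdown test should fail
}

waitforio /var/tmp/bdevperf.sock Nvme1n1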
00:20:31.686 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:31.686 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:31.686 ************************************ 00:20:31.686 END TEST nvmf_shutdown_tc2 00:20:31.686 ************************************ 00:20:31.686 10:33:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:31.686 10:33:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:31.686 10:33:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:31.686 10:33:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:31.686 10:33:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:31.686 ************************************ 00:20:31.686 START TEST nvmf_shutdown_tc3 00:20:31.686 ************************************ 00:20:31.686 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:20:31.686 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:31.686 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:31.686 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:31.686 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:31.686 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:31.686 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:31.687 10:33:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:31.687 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:31.687 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:31.687 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:20:31.687 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:31.687 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:31.946 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:31.946 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:31.946 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:31.946 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:31.946 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:31.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:31.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:20:31.947 00:20:31.947 --- 10.0.0.2 ping statistics --- 00:20:31.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.947 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:31.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:31.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:20:31.947 00:20:31.947 --- 10.0.0.1 ping statistics --- 00:20:31.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.947 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2356338 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2356338 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2356338 ']' 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
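nvmftestinit for tc3 sets up the namespace wiring traced above: nvmf_tcp_init moves the first e810 port (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace as the target side, keeps cvl_0_1 (10.0.0.1) in the root namespace as the initiator, opens TCP port 4420 with an iptables rule, and proves both directions with single pings before nvmf/common.sh@270 prepends ip netns exec to NVMF_APP, so nvmf_tgt itself runs inside the namespace (the triple ip netns exec prefix on the nvmf_tgt command line is that prepend happening once per test case in this job; re-entering the same namespace is harmless). Condensed, the wiring amounts to:

# Sketch of the target/initiator split performed by nvmf_tcp_init.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
# Every target invocation is then wrapped the same way:
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E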
00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:31.947 10:33:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:31.947 [2024-07-15 10:33:26.504828] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:31.947 [2024-07-15 10:33:26.504936] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.947 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.947 [2024-07-15 10:33:26.578719] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:32.247 [2024-07-15 10:33:26.697321] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.247 [2024-07-15 10:33:26.697379] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.247 [2024-07-15 10:33:26.697395] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.247 [2024-07-15 10:33:26.697408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.247 [2024-07-15 10:33:26.697420] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:32.247 [2024-07-15 10:33:26.697497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.247 [2024-07-15 10:33:26.697610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:32.247 [2024-07-15 10:33:26.697677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:32.247 [2024-07-15 10:33:26.697680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:33.178 [2024-07-15 10:33:27.494968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.178 10:33:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:33.178 Malloc1 00:20:33.178 [2024-07-15 10:33:27.570079] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.178 Malloc2 00:20:33.178 Malloc3 00:20:33.178 Malloc4 00:20:33.178 Malloc5 00:20:33.178 Malloc6 00:20:33.436 Malloc7 00:20:33.436 Malloc8 00:20:33.436 Malloc9 00:20:33.436 Malloc10 00:20:33.436 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.436 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:33.436 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:33.436 
10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:33.436 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2356640 00:20:33.436 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2356640 /var/tmp/bdevperf.sock 00:20:33.436 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2356640 ']' 00:20:33.436 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.436 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:33.436 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:33.436 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.437 { 00:20:33.437 "params": { 00:20:33.437 "name": "Nvme$subsystem", 00:20:33.437 "trtype": "$TEST_TRANSPORT", 00:20:33.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.437 "adrfam": "ipv4", 00:20:33.437 "trsvcid": "$NVMF_PORT", 00:20:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.437 "hdgst": ${hdgst:-false}, 00:20:33.437 "ddgst": ${ddgst:-false} 00:20:33.437 }, 00:20:33.437 "method": "bdev_nvme_attach_controller" 00:20:33.437 } 00:20:33.437 EOF 00:20:33.437 )") 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.437 { 00:20:33.437 "params": { 00:20:33.437 "name": "Nvme$subsystem", 00:20:33.437 "trtype": "$TEST_TRANSPORT", 00:20:33.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.437 "adrfam": "ipv4", 00:20:33.437 "trsvcid": "$NVMF_PORT", 00:20:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.437 "hdgst": ${hdgst:-false}, 00:20:33.437 "ddgst": ${ddgst:-false} 00:20:33.437 }, 00:20:33.437 "method": "bdev_nvme_attach_controller" 00:20:33.437 } 00:20:33.437 EOF 00:20:33.437 )") 00:20:33.437 10:33:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.437 { 00:20:33.437 "params": { 00:20:33.437 "name": "Nvme$subsystem", 00:20:33.437 "trtype": "$TEST_TRANSPORT", 00:20:33.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.437 "adrfam": "ipv4", 00:20:33.437 "trsvcid": "$NVMF_PORT", 00:20:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.437 "hdgst": ${hdgst:-false}, 00:20:33.437 "ddgst": ${ddgst:-false} 00:20:33.437 }, 00:20:33.437 "method": "bdev_nvme_attach_controller" 00:20:33.437 } 00:20:33.437 EOF 00:20:33.437 )") 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.437 { 00:20:33.437 "params": { 00:20:33.437 "name": "Nvme$subsystem", 00:20:33.437 "trtype": "$TEST_TRANSPORT", 00:20:33.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.437 "adrfam": "ipv4", 00:20:33.437 "trsvcid": "$NVMF_PORT", 00:20:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.437 "hdgst": ${hdgst:-false}, 00:20:33.437 "ddgst": ${ddgst:-false} 00:20:33.437 }, 00:20:33.437 "method": "bdev_nvme_attach_controller" 00:20:33.437 } 00:20:33.437 EOF 00:20:33.437 )") 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.437 { 00:20:33.437 "params": { 00:20:33.437 "name": "Nvme$subsystem", 00:20:33.437 "trtype": "$TEST_TRANSPORT", 00:20:33.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.437 "adrfam": "ipv4", 00:20:33.437 "trsvcid": "$NVMF_PORT", 00:20:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.437 "hdgst": ${hdgst:-false}, 00:20:33.437 "ddgst": ${ddgst:-false} 00:20:33.437 }, 00:20:33.437 "method": "bdev_nvme_attach_controller" 00:20:33.437 } 00:20:33.437 EOF 00:20:33.437 )") 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.437 { 00:20:33.437 "params": { 00:20:33.437 "name": "Nvme$subsystem", 00:20:33.437 "trtype": "$TEST_TRANSPORT", 00:20:33.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.437 "adrfam": "ipv4", 00:20:33.437 "trsvcid": "$NVMF_PORT", 00:20:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.437 "hdgst": ${hdgst:-false}, 00:20:33.437 "ddgst": ${ddgst:-false} 00:20:33.437 }, 00:20:33.437 "method": "bdev_nvme_attach_controller" 00:20:33.437 } 00:20:33.437 EOF 00:20:33.437 )") 00:20:33.437 10:33:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.437 { 00:20:33.437 "params": { 00:20:33.437 "name": "Nvme$subsystem", 00:20:33.437 "trtype": "$TEST_TRANSPORT", 00:20:33.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.437 "adrfam": "ipv4", 00:20:33.437 "trsvcid": "$NVMF_PORT", 00:20:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.437 "hdgst": ${hdgst:-false}, 00:20:33.437 "ddgst": ${ddgst:-false} 00:20:33.437 }, 00:20:33.437 "method": "bdev_nvme_attach_controller" 00:20:33.437 } 00:20:33.437 EOF 00:20:33.437 )") 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.437 { 00:20:33.437 "params": { 00:20:33.437 "name": "Nvme$subsystem", 00:20:33.437 "trtype": "$TEST_TRANSPORT", 00:20:33.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.437 "adrfam": "ipv4", 00:20:33.437 "trsvcid": "$NVMF_PORT", 00:20:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.437 "hdgst": ${hdgst:-false}, 00:20:33.437 "ddgst": ${ddgst:-false} 00:20:33.437 }, 00:20:33.437 "method": "bdev_nvme_attach_controller" 00:20:33.437 } 00:20:33.437 EOF 00:20:33.437 )") 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.437 { 00:20:33.437 "params": { 00:20:33.437 "name": "Nvme$subsystem", 00:20:33.437 "trtype": "$TEST_TRANSPORT", 00:20:33.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.437 "adrfam": "ipv4", 00:20:33.437 "trsvcid": "$NVMF_PORT", 00:20:33.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.437 "hdgst": ${hdgst:-false}, 00:20:33.437 "ddgst": ${ddgst:-false} 00:20:33.437 }, 00:20:33.437 "method": "bdev_nvme_attach_controller" 00:20:33.437 } 00:20:33.437 EOF 00:20:33.437 )") 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:33.437 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.438 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.438 { 00:20:33.438 "params": { 00:20:33.438 "name": "Nvme$subsystem", 00:20:33.438 "trtype": "$TEST_TRANSPORT", 00:20:33.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.438 "adrfam": "ipv4", 00:20:33.438 "trsvcid": "$NVMF_PORT", 00:20:33.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.438 "hdgst": ${hdgst:-false}, 00:20:33.438 "ddgst": ${ddgst:-false} 00:20:33.438 }, 00:20:33.438 "method": "bdev_nvme_attach_controller" 00:20:33.438 } 00:20:33.438 EOF 00:20:33.438 )") 00:20:33.438 10:33:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:33.438 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:20:33.438 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:33.438 10:33:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:33.438 "params": { 00:20:33.438 "name": "Nvme1", 00:20:33.438 "trtype": "tcp", 00:20:33.438 "traddr": "10.0.0.2", 00:20:33.438 "adrfam": "ipv4", 00:20:33.438 "trsvcid": "4420", 00:20:33.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:33.438 "hdgst": false, 00:20:33.438 "ddgst": false 00:20:33.438 }, 00:20:33.438 "method": "bdev_nvme_attach_controller" 00:20:33.438 },{ 00:20:33.438 "params": { 00:20:33.438 "name": "Nvme2", 00:20:33.438 "trtype": "tcp", 00:20:33.438 "traddr": "10.0.0.2", 00:20:33.438 "adrfam": "ipv4", 00:20:33.438 "trsvcid": "4420", 00:20:33.438 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:33.438 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:33.438 "hdgst": false, 00:20:33.438 "ddgst": false 00:20:33.438 }, 00:20:33.438 "method": "bdev_nvme_attach_controller" 00:20:33.438 },{ 00:20:33.438 "params": { 00:20:33.438 "name": "Nvme3", 00:20:33.438 "trtype": "tcp", 00:20:33.438 "traddr": "10.0.0.2", 00:20:33.438 "adrfam": "ipv4", 00:20:33.438 "trsvcid": "4420", 00:20:33.438 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:33.438 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:33.438 "hdgst": false, 00:20:33.438 "ddgst": false 00:20:33.438 }, 00:20:33.438 "method": "bdev_nvme_attach_controller" 00:20:33.438 },{ 00:20:33.438 "params": { 00:20:33.438 "name": "Nvme4", 00:20:33.438 "trtype": "tcp", 00:20:33.438 "traddr": "10.0.0.2", 00:20:33.438 "adrfam": "ipv4", 00:20:33.438 "trsvcid": "4420", 00:20:33.438 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:33.438 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:33.438 "hdgst": false, 00:20:33.438 "ddgst": false 00:20:33.438 }, 00:20:33.438 "method": "bdev_nvme_attach_controller" 00:20:33.438 },{ 00:20:33.438 "params": { 00:20:33.438 "name": "Nvme5", 00:20:33.438 "trtype": "tcp", 00:20:33.438 "traddr": "10.0.0.2", 00:20:33.438 "adrfam": "ipv4", 00:20:33.438 "trsvcid": "4420", 00:20:33.438 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:33.438 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:33.438 "hdgst": false, 00:20:33.438 "ddgst": false 00:20:33.438 }, 00:20:33.438 "method": "bdev_nvme_attach_controller" 00:20:33.438 },{ 00:20:33.438 "params": { 00:20:33.438 "name": "Nvme6", 00:20:33.438 "trtype": "tcp", 00:20:33.438 "traddr": "10.0.0.2", 00:20:33.438 "adrfam": "ipv4", 00:20:33.438 "trsvcid": "4420", 00:20:33.438 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:33.438 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:33.438 "hdgst": false, 00:20:33.438 "ddgst": false 00:20:33.438 }, 00:20:33.438 "method": "bdev_nvme_attach_controller" 00:20:33.438 },{ 00:20:33.438 "params": { 00:20:33.438 "name": "Nvme7", 00:20:33.438 "trtype": "tcp", 00:20:33.438 "traddr": "10.0.0.2", 00:20:33.438 "adrfam": "ipv4", 00:20:33.438 "trsvcid": "4420", 00:20:33.438 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:33.438 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:33.438 "hdgst": false, 00:20:33.438 "ddgst": false 00:20:33.438 }, 00:20:33.438 "method": "bdev_nvme_attach_controller" 00:20:33.438 },{ 00:20:33.438 "params": { 00:20:33.438 "name": "Nvme8", 00:20:33.438 "trtype": "tcp", 00:20:33.438 "traddr": "10.0.0.2", 00:20:33.438 "adrfam": "ipv4", 
00:20:33.438 "trsvcid": "4420", 00:20:33.438 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:33.438 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:33.438 "hdgst": false, 00:20:33.438 "ddgst": false 00:20:33.438 }, 00:20:33.438 "method": "bdev_nvme_attach_controller" 00:20:33.438 },{ 00:20:33.438 "params": { 00:20:33.438 "name": "Nvme9", 00:20:33.438 "trtype": "tcp", 00:20:33.438 "traddr": "10.0.0.2", 00:20:33.438 "adrfam": "ipv4", 00:20:33.438 "trsvcid": "4420", 00:20:33.438 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:33.438 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:33.438 "hdgst": false, 00:20:33.438 "ddgst": false 00:20:33.438 }, 00:20:33.438 "method": "bdev_nvme_attach_controller" 00:20:33.438 },{ 00:20:33.438 "params": { 00:20:33.438 "name": "Nvme10", 00:20:33.438 "trtype": "tcp", 00:20:33.438 "traddr": "10.0.0.2", 00:20:33.438 "adrfam": "ipv4", 00:20:33.438 "trsvcid": "4420", 00:20:33.438 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:33.438 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:33.438 "hdgst": false, 00:20:33.438 "ddgst": false 00:20:33.438 }, 00:20:33.438 "method": "bdev_nvme_attach_controller" 00:20:33.438 }' 00:20:33.696 [2024-07-15 10:33:28.086082] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:33.696 [2024-07-15 10:33:28.086176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356640 ] 00:20:33.696 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.696 [2024-07-15 10:33:28.148884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.696 [2024-07-15 10:33:28.258495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.220 Running I/O for 10 seconds... 
00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.220 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.478 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:36.478 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:36.478 10:33:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # read_io_count=150 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 150 -ge 100 ']' 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2356338 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2356338 ']' 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2356338 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2356338 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2356338' killing process with pid 2356338 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2356338 00:20:36.752 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2356338 00:20:36.752 [2024-07-15 10:33:31.189901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64cba0 is same with the state(5) to be set 00:20:36.752
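The polling just traced is target/shutdown.sh's waitforio helper: up to ten passes over bdev_get_iostat on the bdevperf RPC socket, 0.25 s apart, succeeding once Nvme1n1 reports at least 100 completed reads (67 on the first pass, 150 on the second), after which killprocess from autotest_common.sh takes down the target. A rough sketch of the two helpers, assuming bash, jq and SPDK's scripts/rpc.py; the bodies are reconstructed from the trace, not copied from the scripts:

#!/usr/bin/env bash
# Sketch of the waitforio / killprocess pattern from the trace above.
waitforio() {
	local rpc_sock=$1 bdev=$2
	local ret=1 i read_io_count
	# Poll up to 10 times, 0.25 s apart, until the bdev shows >= 100 reads.
	for ((i = 10; i != 0; i--)); do
		read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
			jq -r '.bdevs[0].num_read_ops')
		if [ "$read_io_count" -ge 100 ]; then
			ret=0
			break
		fi
		sleep 0.25
	done
	return $ret
}

killprocess() {
	local pid=$1
	[ -n "$pid" ] || return 1
	kill -0 "$pid" || return 1                  # is it still alive?
	if [ "$(uname)" = Linux ]; then
		ps --no-headers -o comm= "$pid"     # e.g. reactor_1 in the trace
	fi
	echo "killing process with pid $pid"
	kill "$pid" && wait "$pid"
}

waitforio /var/tmp/bdevperf.sock Nvme1n1 && killprocess 2356338

The tcp.c:1607 recv-state errors and ABORTED - SQ DELETION completions that follow are consistent with that kill landing while bdevperf still has reads in flight: the target's TCP qpairs are torn down and every outstanding command is aborted.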
[tcp.c:1607 recv-state error repeated for tqpair=0x64cba0 through 2024-07-15 10:33:31.190768] 00:20:36.753 [2024-07-15 10:33:31.192254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64a640 is same with the state(5) to be set 00:20:36.753 [same recv-state error repeated for tqpair=0x64a640 through 2024-07-15 10:33:31.193293] 00:20:36.753 [2024-07-15 10:33:31.193867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.754 [2024-07-15 10:33:31.193913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.193932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.754 [2024-07-15 10:33:31.193945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.193959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.754 [2024-07-15 10:33:31.193973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.193986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.754 [2024-07-15 10:33:31.193999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.194011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e4d30 is same with the state(5) to be set 00:20:36.754 [2024-07-15 10:33:31.194113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.754 [2024-07-15 10:33:31.194134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.194149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.754 [2024-07-15 10:33:31.194162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.194179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.754 [2024-07-15 
10:33:31.194191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.194205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.754 [2024-07-15 10:33:31.194222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.194235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1056830 is same with the state(5) to be set 00:20:36.754 [2024-07-15 10:33:31.194692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.194717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.194744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.194759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.194776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.194789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.194805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.194818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.194833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.194846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.194872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.194894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.194910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.194924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.194939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.194952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.194968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.194981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.194996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.195009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.195025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.195038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.195053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.195071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.195087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.195101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.195116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.195130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.195145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.195158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.195176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.195189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.195205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.195218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.195233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.195246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.195261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.195274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.195289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.195303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.195318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.195331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.754 [2024-07-15 10:33:31.195346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.754 [2024-07-15 10:33:31.195359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.755 [2024-07-15 10:33:31.195375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.755 [2024-07-15 10:33:31.195388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.755 [2024-07-15 10:33:31.195403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.755 [2024-07-15 10:33:31.195416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.755 [2024-07-15 10:33:31.195431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.755 [2024-07-15 10:33:31.195447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.755 [2024-07-15 10:33:31.195463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.755 [2024-07-15 10:33:31.195476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.755 [2024-07-15 10:33:31.195491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.755 [2024-07-15 10:33:31.195505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.755 [2024-07-15 10:33:31.195520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.755 [2024-07-15 10:33:31.195533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.755 [2024-07-15 10:33:31.195548] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.755 [2024-07-15 10:33:31.195561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.755 [2024-07-15 10:33:31.195576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.755 [2024-07-15 10:33:31.195569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64aae0 is same with the state(5) to be set 00:20:36.755 [recv-state error repeated for tqpair=0x64aae0 through 2024-07-15 10:33:31.196375, interleaved with the READ abort notices] 00:20:36.755 [2024-07-15 10:33:31.195590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.755 [READ commands cid:14 through cid:43 (lba stepping by 128 from 26368 to 30080, len:128) each logged and aborted with the same SQ DELETION completion] 00:20:36.756 [2024-07-15 10:33:31.196547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.756 [2024-07-15 10:33:31.196560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.756 [2024-07-15 10:33:31.196575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.756 [2024-07-15 10:33:31.196589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.756 [2024-07-15 10:33:31.196603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.756 [2024-07-15 10:33:31.196616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.756 [2024-07-15 10:33:31.196631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.756 [2024-07-15 10:33:31.196644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.756 [2024-07-15 10:33:31.196684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:36.756 [2024-07-15 10:33:31.196761] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11d6140 was disconnected and freed. reset controller. 00:20:36.756 [2024-07-15 10:33:31.199273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.756 [2024-07-15 10:33:31.199305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.756 [2024-07-15 10:33:31.199320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.756 [2024-07-15 10:33:31.199332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.756 [2024-07-15 10:33:31.199344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757 [2024-07-15 10:33:31.199356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757 [2024-07-15 10:33:31.199367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757 [2024-07-15 10:33:31.199379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757 [2024-07-15 10:33:31.199391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757 [2024-07-15 10:33:31.199403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757 [2024-07-15 10:33:31.199414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757 [2024-07-15 10:33:31.199426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757 [2024-07-15 10:33:31.199439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) 
to be set 00:20:36.757
[2024-07-15 10:33:31.199451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199555] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199629] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199677] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.757
[2024-07-15 10:33:31.199705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.757
[2024-07-15 10:33:31.199718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.757
[2024-07-15 10:33:31.199743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.757
[2024-07-15 10:33:31.199756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.757
[2024-07-15 10:33:31.199771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.757
[2024-07-15 10:33:31.199784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.757
[2024-07-15 10:33:31.199796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.757
[2024-07-15 10:33:31.199824] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.757
[2024-07-15 10:33:31.199837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.757
[2024-07-15 10:33:31.199849] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199870] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.757
[2024-07-15 10:33:31.199891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.757
[2024-07-15 10:33:31.199907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.757
[2024-07-15 10:33:31.199919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.757
[2024-07-15 10:33:31.199932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.757
[2024-07-15 10:33:31.199944] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.757
[2024-07-15 10:33:31.199957] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.757
[2024-07-15 10:33:31.199987] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.757
[2024-07-15 10:33:31.199989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.757
[2024-07-15 10:33:31.200000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.758
[2024-07-15 10:33:31.200005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758
[2024-07-15 10:33:31.200012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.758
[2024-07-15 10:33:31.200019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758
[2024-07-15 10:33:31.200024] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.758
[2024-07-15 10:33:31.200035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758
[2024-07-15 10:33:31.200036] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.758
[2024-07-15 10:33:31.200051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758
[2024-07-15 10:33:31.200051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.758
[2024-07-15 10:33:31.200066] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.758
[2024-07-15 10:33:31.200068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758
[2024-07-15 10:33:31.200078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.758
[2024-07-15 10:33:31.200082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758
[2024-07-15 10:33:31.200090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.758
[2024-07-15 10:33:31.200098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758
[2024-07-15 10:33:31.200102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b440 is same with the state(5) to be set 00:20:36.758
[2024-07-15 10:33:31.200112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758
[2024-07-15 10:33:31.200127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758
[2024-07-15 10:33:31.200141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758
[2024-07-15 10:33:31.200156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758
[2024-07-15 10:33:31.200170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758
[2024-07-15 10:33:31.200186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758
[2024-07-15 10:33:31.200204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758
[2024-07-15 10:33:31.200221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758
[2024-07-15 10:33:31.200234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15
10:33:31.200250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 
10:33:31.200535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 
10:33:31.200822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.200974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.200988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.201002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.201016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.201030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.201044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.201058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.201073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.201086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.758 [2024-07-15 10:33:31.201100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.758 [2024-07-15 10:33:31.201113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759 [2024-07-15 
10:33:31.201128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.759 [2024-07-15 10:33:31.201141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759 [2024-07-15 10:33:31.201156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.759 [2024-07-15 10:33:31.201171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759 [2024-07-15 10:33:31.201186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.759 [2024-07-15 10:33:31.201199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759 [2024-07-15 10:33:31.201214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.759 [2024-07-15 10:33:31.201227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759 [2024-07-15 10:33:31.201242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.759 [2024-07-15 10:33:31.201255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759 [2024-07-15 10:33:31.201256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.759 [2024-07-15 10:33:31.201282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759 [2024-07-15 10:33:31.201296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.759 [2024-07-15 10:33:31.201309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759 [2024-07-15 10:33:31.201321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 
nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.759
[2024-07-15 10:33:31.201345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759
[2024-07-15 10:33:31.201358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.759
[2024-07-15 10:33:31.201370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759
[2024-07-15 10:33:31.201382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.759
[2024-07-15 10:33:31.201394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759
[2024-07-15 10:33:31.201409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.759
[2024-07-15 10:33:31.201435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759
[2024-07-15 10:33:31.201448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.759
[2024-07-15 10:33:31.201460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759
[2024-07-15 10:33:31.201473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.759
[2024-07-15 10:33:31.201498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759
[2024-07-15 10:33:31.201511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.759
[2024-07-15 10:33:31.201524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759
[2024-07-15 10:33:31.201536] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.759
[2024-07-15 10:33:31.201548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759
[2024-07-15 10:33:31.201563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.759
[2024-07-15 10:33:31.201590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759
[2024-07-15 10:33:31.201602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.759
[2024-07-15 10:33:31.201615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759
[2024-07-15 10:33:31.201622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.759
[2024-07-15 10:33:31.201627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201697] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11a2180 was disconnected and freed. reset controller. 00:20:36.759 [2024-07-15 10:33:31.201712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201725] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201736] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201748] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201808] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201821] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201886] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201912] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201948] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.759 [2024-07-15 10:33:31.201960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.201972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.201984] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.201996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.202012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.202025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.202036] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.202048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.202059] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.202071] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64b900 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.202068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:36.760 [2024-07-15 10:33:31.202109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1056830 (9): Bad file descriptor 00:20:36.760 [2024-07-15 10:33:31.203403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.203429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.203442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.203455] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.203467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.203480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760 [2024-07-15 
10:33:31.203492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203504] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203528] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203606] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203671] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203684] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203708] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:36.760
[2024-07-15 10:33:31.203769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203781] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203815] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107ab30 (9): Bad file descriptor 00:20:36.760
[2024-07-15 10:33:31.203836] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203848] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203866] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203899] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203934] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203957] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203980] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.203996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.204008] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760
[2024-07-15 10:33:31.204021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the
state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.204033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.204046] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.204058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.204070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.204081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.204093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.204105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.204117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.204129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.204140] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.204152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64bda0 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.204603] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:36.760 [2024-07-15 10:33:31.204790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.760 [2024-07-15 10:33:31.204819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1056830 with addr=10.0.0.2, port=4420 00:20:36.760 [2024-07-15 10:33:31.204835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1056830 is same with the state(5) to be set 00:20:36.760 [2024-07-15 10:33:31.204885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e4d30 (9): Bad file descriptor 00:20:36.760 [2024-07-15 10:33:31.204954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.760 [2024-07-15 10:33:31.204975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.760 [2024-07-15 10:33:31.204989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.760 [2024-07-15 10:33:31.205002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.760 [2024-07-15 10:33:31.205016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.760 [2024-07-15 10:33:31.205028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
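The "connect() failed, errno = 111" lines above are Linux ECONNREFUSED: at that instant nothing was accepting TCP connections at 10.0.0.2:4420, which is expected while the target side of the reset test tears down and re-creates its listener. A minimal sketch (not SPDK code; the address and port are taken from the log, and on a host without a listener there the call fails immediately with 111) that reproduces the same errno:

/* Minimal sketch: connect() to a TCP port with no listener fails with
 * ECONNREFUSED, which is errno 111 on Linux -- the value posix_sock_create
 * reports above. Address and port are copied from the log for illustration. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);   /* NVMe-oF target from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* With no target listening: "connect() failed, errno = 111 (Connection refused)" */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}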
00:20:36.760 [2024-07-15 10:33:31.205071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1078c60 is same with the state(5) to be set
00:20:36.760 [2024-07-15 10:33:31.205121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:36.760 [2024-07-15 10:33:31.205141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for qid:0 cid:1-3, and again for the admin qpairs at 0x107ad30 (10:33:31.205289-205394), 0x1222240 (10:33:31.205451-205555), 0x1082120 (10:33:31.205615-205718) and 0x111ebd0 (10:33:31.205731 onward), each group preceded by its own nvme_tcp.c: 327 recv-state error ...]
[... 2024-07-15 10:33:31.205224 through 10:33:31.205984: tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: "The recv state of tqpair=0x64c240 is same with the state(5) to be set" repeated ~45 times, interleaved with the messages above ...]
00:20:36.762 [2024-07-15 10:33:31.205833] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:36.762 [2024-07-15 10:33:31.205912] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[... 2024-07-15 10:33:31.206662 through 10:33:31.207476: tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: "The recv state of tqpair=0x64c6e0 is same with the state(5) to be set" repeated ~45 times, interleaved with the messages that follow ...]
00:20:36.762 [2024-07-15 10:33:31.206680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:36.762 [2024-07-15 10:33:31.206709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107ab30 with addr=10.0.0.2, port=4420
00:20:36.762 [2024-07-15 10:33:31.206725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107ab30 is same with the state(5) to be set
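In the spdk_nvme_print_completion lines, the "(00/08)" pair is the completion's status code type and status code: SCT 0x0 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion" -- every queued ASYNC EVENT REQUEST is completed with this status when its admin submission queue goes away during the reset. A small sketch, assuming the standard NVMe status-field layout (phase tag in bit 0, SC in bits 8:1, SCT in bits 11:9, DNR in bit 15), of how such a status word decodes:

/* Illustrative sketch (not SPDK's printer): decode the 16-bit completion
 * status field that produces the "(00/08)" seen in the log. */
#include <stdio.h>
#include <stdint.h>

static void decode_status(uint16_t status)
{
    unsigned sc  = (status >> 1) & 0xff;   /* status code      */
    unsigned sct = (status >> 9) & 0x7;    /* status code type */
    unsigned dnr = (status >> 15) & 0x1;   /* do not retry     */
    printf("(%02x/%02x) dnr:%u%s\n", sct, sc, dnr,
           (sct == 0x0 && sc == 0x08) ? "  -> ABORTED - SQ DELETION" : "");
}

int main(void)
{
    decode_status(0x08 << 1);  /* SCT 0x0 / SC 0x08: the status the aborted commands report */
    return 0;
}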
00:20:36.762 [2024-07-15 10:33:31.206745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1056830 (9): Bad file descriptor
00:20:36.762 [2024-07-15 10:33:31.206808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:36.762 [2024-07-15 10:33:31.206836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same READ / ABORTED - SQ DELETION pair repeated for qid:1 cid:1 through cid:63, lba advancing by 128 from 24704 to 32640 (10:33:31.206858 through 10:33:31.208809) ...]
00:20:36.764 [2024-07-15 10:33:31.208823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d88e0 is same with the state(5) to be set
00:20:36.764 [2024-07-15 10:33:31.208926] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11d88e0 was disconnected and freed. reset controller.
00:20:36.764 [2024-07-15 10:33:31.209004] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:36.764 [2024-07-15 10:33:31.209405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107ab30 (9): Bad file descriptor
00:20:36.764 [2024-07-15 10:33:31.209433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:36.764 [2024-07-15 10:33:31.209448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:36.764 [2024-07-15 10:33:31.209464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:36.764 [2024-07-15 10:33:31.210692] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:36.764 [2024-07-15 10:33:31.210910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:36.764 [2024-07-15 10:33:31.210935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:20:36.764 [2024-07-15 10:33:31.210957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082120 (9): Bad file descriptor
00:20:36.764 [2024-07-15 10:33:31.210977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:36.764 [2024-07-15 10:33:31.210990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:36.764 [2024-07-15 10:33:31.211003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:20:36.764 [2024-07-15 10:33:31.211238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
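The "(9): Bad file descriptor" flush failures are errno 9 (EBADF), consistent with the qpair's socket already having been closed by the disconnect path before nvme_tcp_qpair_process_completions tries to flush it; the reset is then reported failed and retried against the next controller. A minimal sketch (not SPDK code) of the same errno:

/* Minimal sketch: writing to a socket fd that was already closed fails
 * immediately with EBADF (errno 9) -- the "(9): Bad file descriptor"
 * value the flush errors above report. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);                     /* disconnect path has closed the socket */

    if (write(fd, "x", 1) < 0)     /* later flush attempt on the same fd    */
        /* Prints "flush failed: errno = 9 (Bad file descriptor)" */
        printf("flush failed: errno = %d (%s)\n", errno, strerror(errno));
    return 0;
}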
00:20:36.764 [2024-07-15 10:33:31.211583] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:36.764 [2024-07-15 10:33:31.211839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:36.764 [2024-07-15 10:33:31.211871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082120 with addr=10.0.0.2, port=4420
00:20:36.764 [2024-07-15 10:33:31.211896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082120 is same with the state(5) to be set
00:20:36.764 [2024-07-15 10:33:31.211957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:36.764 [2024-07-15 10:33:31.211978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for every outstanding I/O on this qpair: READ cid:5-63 (lba 17024-24448 in steps of 128) and WRITE cid:0-3 (lba 24576-24960), each aborted with SQ DELETION (00/08) ...]
00:20:36.766 [2024-07-15 10:33:31.223323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4520 is same with the state(5) to be set
00:20:36.766 [2024-07-15 10:33:31.223457] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11a4520 was disconnected and freed. reset controller.
00:20:36.766 [2024-07-15 10:33:31.223597] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:36.766 [2024-07-15 10:33:31.223683] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:36.766 [2024-07-15 10:33:31.223753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082120 (9): Bad file descriptor
00:20:36.766 [2024-07-15 10:33:31.223904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:36.766 [2024-07-15 10:33:31.223926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1-3 ...]
00:20:36.766 [2024-07-15 10:33:31.224020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1217240 is same with the state(5) to be set
00:20:36.766 [2024-07-15 10:33:31.224052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1078c60 (9): Bad file descriptor
00:20:36.766 [2024-07-15 10:33:31.224083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107ad30 (9): Bad file descriptor
00:20:36.766 [2024-07-15 10:33:31.224114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1222240 (9): Bad file descriptor
00:20:36.766 [2024-07-15 10:33:31.224144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x111ebd0 (9): Bad file descriptor
00:20:36.766 [2024-07-15 10:33:31.224198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:36.766 [2024-07-15 10:33:31.224218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1-3 ...]
00:20:36.766 [2024-07-15 10:33:31.224311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1216730 is same with the state(5) to be set
00:20:36.766 [2024-07-15 10:33:31.224345] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:36.766 [2024-07-15 10:33:31.225654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:36.766 [2024-07-15 10:33:31.225699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:20:36.766 [2024-07-15 10:33:31.225717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:20:36.766 [2024-07-15 10:33:31.225739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:20:36.766 [2024-07-15 10:33:31.225854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:36.766 [2024-07-15 10:33:31.225894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:1-63 (lba 16512-24448 in steps of 128) ...]
00:20:36.768 [2024-07-15 10:33:31.227813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10900f0 is same with the state(5) to be set
00:20:36.768 [2024-07-15 10:33:31.230013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:36.768 [2024-07-15 10:33:31.230043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:36.768 [2024-07-15 10:33:31.230063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
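Each nqn.2016-06.io.spdk:cnodeN above is cycling through the same four log steps — resetting controller, Ctrlr is in error state, controller reinitialization failed, in failed state — after which bdev_nvme reports "Resetting controller failed." and the cycle restarts while the target stays down. A rough sketch of that retry shape (invented names; a simplification under that assumption, not the bdev_nvme implementation):

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the reconnect poll: while the target side of
     * 10.0.0.2:4420 is down, every attempt fails (connect() -> ECONNREFUSED). */
    static bool reconnect_poll(void)
    {
        return false;
    }

    int main(void)
    {
        for (int attempt = 1; attempt <= 3; attempt++) {
            printf("attempt %d: resetting controller\n", attempt);
            if (reconnect_poll())
                return 0;                       /* reconnected, cycle ends */
            printf("attempt %d: Ctrlr is in error state\n", attempt);
            printf("attempt %d: controller reinitialization failed\n", attempt);
            printf("attempt %d: in failed state.\n", attempt);
            printf("attempt %d: Resetting controller failed.\n", attempt);
        }
        return 1;
    }

Once the target comes back up, the poll would succeed and the cycle would stop, which is presumably what this phase of the test is waiting to observe.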
00:20:36.768 [2024-07-15 10:33:31.230077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:36.768 [2024-07-15 10:33:31.230300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:36.768 [2024-07-15 10:33:31.230329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebd0 with addr=10.0.0.2, port=4420
00:20:36.768 [2024-07-15 10:33:31.230347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111ebd0 is same with the state(5) to be set
00:20:36.768 [2024-07-15 10:33:31.230835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:36.768 [2024-07-15 10:33:31.230864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1056830 with addr=10.0.0.2, port=4420
00:20:36.768 [2024-07-15 10:33:31.230892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1056830 is same with the state(5) to be set
00:20:36.768 [2024-07-15 10:33:31.231040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:36.768 [2024-07-15 10:33:31.231067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107ab30 with addr=10.0.0.2, port=4420
00:20:36.768 [2024-07-15 10:33:31.231082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107ab30 is same with the state(5) to be set
00:20:36.768 [2024-07-15 10:33:31.231231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:36.768 [2024-07-15 10:33:31.231257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e4d30 with addr=10.0.0.2, port=4420
00:20:36.768 [2024-07-15 10:33:31.231273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e4d30 is same with the state(5) to be set
00:20:36.768 [2024-07-15 10:33:31.231295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x111ebd0 (9): Bad file descriptor
00:20:36.768 [2024-07-15 10:33:31.231615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1056830 (9): Bad file descriptor
00:20:36.768 [2024-07-15 10:33:31.231643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107ab30 (9): Bad file descriptor
00:20:36.768 [2024-07-15 10:33:31.231662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e4d30 (9): Bad file descriptor
00:20:36.768 [2024-07-15 10:33:31.231678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:20:36.768 [2024-07-15 10:33:31.231691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:20:36.768 [2024-07-15 10:33:31.231706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:20:36.768 [2024-07-15 10:33:31.231781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:36.768 [2024-07-15 10:33:31.231802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:36.768 [2024-07-15 10:33:31.231821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:36.768 [2024-07-15 10:33:31.231835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:36.768 [2024-07-15 10:33:31.231853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:36.768 [2024-07-15 10:33:31.231867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:36.768 [2024-07-15 10:33:31.231891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:20:36.768 [2024-07-15 10:33:31.231911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:20:36.768 [2024-07-15 10:33:31.231925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:20:36.768 [2024-07-15 10:33:31.231937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:20:36.768 [2024-07-15 10:33:31.231991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:36.768 [2024-07-15 10:33:31.232010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:36.768 [2024-07-15 10:33:31.232022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:36.768 [2024-07-15 10:33:31.233774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1217240 (9): Bad file descriptor
00:20:36.768 [2024-07-15 10:33:31.233836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1216730 (9): Bad file descriptor
00:20:36.768 [2024-07-15 10:33:31.233998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:36.768 [2024-07-15 10:33:31.234024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:1-33 (lba 16512-20608 in steps of 128) ...]
00:20:36.769 [2024-07-15 10:33:31.235047] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.769 [2024-07-15 10:33:31.235630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.769 [2024-07-15 10:33:31.235646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.235659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.235675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.235689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.235704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.235719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.235735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.235751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.235767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.235781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.235797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.235811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.235826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.235840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.235856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.235869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.235892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.235906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.235925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.235939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.235954] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d7450 is same with the state(5) to be set 00:20:36.770 [2024-07-15 10:33:31.237252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237555] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.237980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.237994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.238010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.238023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.238039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.238052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.238068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.238081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.238097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.238110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.238125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.238138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.238154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.238178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.238193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.238211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.238228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.770 [2024-07-15 10:33:31.238242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.770 [2024-07-15 10:33:31.238258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:36.771 [2024-07-15 10:33:31.238787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.238980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.238993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.239008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.239021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.239037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.239050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.239066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.239080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 
10:33:31.239095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.239108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.239124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.239137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.239152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.239166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.239193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.239207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.239222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a3610 is same with the state(5) to be set 00:20:36.771 [2024-07-15 10:33:31.240462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.240485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.240507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.240522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.240538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.240552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.240567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.240585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.240602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.240616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.240632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.240645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.240661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.240674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.240690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.240703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.240719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.240732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.240747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.771 [2024-07-15 10:33:31.240761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.771 [2024-07-15 10:33:31.240777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.240790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.240805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.240819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.240834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.240848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.240864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.240884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.240902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.240917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.240940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.240974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.240998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.241979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.241994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.242007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.242023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.242036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.242051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.242064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.772 [2024-07-15 10:33:31.242080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.772 [2024-07-15 10:33:31.242094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.242109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.242126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.242142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.242156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.242171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.242191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.242207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.242220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.242235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.242249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.242264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.242278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.242293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.242306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.242322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.242335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.242350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.242363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.242379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.242392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.242408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.242422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.242437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051fc0 is same with the state(5) to be set 00:20:36.773 [2024-07-15 10:33:31.243712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:36.773 [2024-07-15 10:33:31.243744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:36.773 [2024-07-15 10:33:31.243764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:36.773 
[2024-07-15 10:33:31.243782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:36.773 [2024-07-15 10:33:31.244275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:36.773 [2024-07-15 10:33:31.244306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082120 with addr=10.0.0.2, port=4420
00:20:36.773 [2024-07-15 10:33:31.244323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082120 is same with the state(5) to be set
00:20:36.773 [2024-07-15 10:33:31.244454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:36.773 [2024-07-15 10:33:31.244479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1222240 with addr=10.0.0.2, port=4420
00:20:36.773 [2024-07-15 10:33:31.244494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1222240 is same with the state(5) to be set
00:20:36.773 [2024-07-15 10:33:31.244619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:36.773 [2024-07-15 10:33:31.244644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1078c60 with addr=10.0.0.2, port=4420
00:20:36.773 [2024-07-15 10:33:31.244659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1078c60 is same with the state(5) to be set
00:20:36.773 [2024-07-15 10:33:31.244780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:36.773 [2024-07-15 10:33:31.244804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107ad30 with addr=10.0.0.2, port=4420
00:20:36.773 [2024-07-15 10:33:31.244819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107ad30 is same with the state(5) to be set
00:20:36.773 [2024-07-15 10:33:31.245684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:36.773 [2024-07-15 10:33:31.245710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:36.773 [2024-07-15 10:33:31.245727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:36.773 [2024-07-15 10:33:31.245743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:36.773 [2024-07-15 10:33:31.245802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082120 (9): Bad file descriptor
00:20:36.773 [2024-07-15 10:33:31.245826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1222240 (9): Bad file descriptor
00:20:36.773 [2024-07-15 10:33:31.245845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1078c60 (9): Bad file descriptor
00:20:36.773 [2024-07-15 10:33:31.245862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107ad30 (9): Bad file descriptor
00:20:36.773 [2024-07-15 10:33:31.245963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:36.773 [2024-07-15 10:33:31.245986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:36.773 [2024-07-15 10:33:31.246009] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.773 [2024-07-15 10:33:31.246551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.773 [2024-07-15 10:33:31.246567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.246580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.246595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.246609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.246624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.246638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.246654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.246667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.246684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.246697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.246713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.246726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.246742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.246755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.246771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.246785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.246801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.246814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.246830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.246847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.246864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.246884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.246902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.246927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.246942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.246956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.246972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.246986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.774 [2024-07-15 10:33:31.247664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.774 [2024-07-15 10:33:31.247677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.247693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.247707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.247721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a59d0 is same with the state(5) to be set 00:20:36.775 [2024-07-15 10:33:31.248976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.248999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249361] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.249971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.249988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.250004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.250018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.250033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.250047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.250062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.250075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.250091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.250104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.250119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.250133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.250148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.250161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.775 [2024-07-15 10:33:31.250177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.775 [2024-07-15 10:33:31.250190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:36.776 [2024-07-15 10:33:31.250557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 
10:33:31.250850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.776 [2024-07-15 10:33:31.250864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.776 [2024-07-15 10:33:31.250892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a6c80 is same with the state(5) to be set 00:20:36.776 [2024-07-15 10:33:31.253385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:36.776 task offset: 30720 on job bdev=Nvme1n1 fails 00:20:36.776 00:20:36.776 Latency(us) 00:20:36.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.776 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.776 Job: Nvme1n1 ended in about 0.87 seconds with error 00:20:36.776 Verification LBA range: start 0x0 length 0x400 00:20:36.776 Nvme1n1 : 0.87 221.46 13.84 73.82 0.00 214161.68 4053.52 246997.90 00:20:36.776 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.776 Job: Nvme2n1 ended in about 0.90 seconds with error 00:20:36.776 Verification LBA range: start 0x0 length 0x400 00:20:36.776 Nvme2n1 : 0.90 141.51 8.84 70.76 0.00 292105.73 20291.89 256318.58 00:20:36.776 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.776 Job: Nvme3n1 ended in about 0.88 seconds with error 00:20:36.776 Verification LBA range: start 0x0 length 0x400 00:20:36.776 Nvme3n1 : 0.88 218.66 13.67 72.89 0.00 207799.37 7427.41 251658.24 00:20:36.776 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.776 Job: Nvme4n1 ended in about 0.87 seconds with error 00:20:36.776 Verification LBA range: start 0x0 length 0x400 00:20:36.776 Nvme4n1 : 0.87 220.46 13.78 73.49 0.00 201411.13 4611.79 256318.58 00:20:36.776 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.776 Job: Nvme5n1 ended in about 0.91 seconds with error 00:20:36.776 Verification LBA range: start 0x0 length 0x400 00:20:36.776 Nvme5n1 : 0.91 141.01 8.81 70.51 0.00 274761.96 18835.53 257872.02 00:20:36.776 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.776 Job: Nvme6n1 ended in about 0.91 seconds with error 00:20:36.776 Verification LBA range: start 0x0 length 0x400 00:20:36.776 Nvme6n1 : 0.91 140.51 8.78 70.26 0.00 269851.56 39224.51 273406.48 00:20:36.776 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.776 Job: Nvme7n1 ended in about 0.89 seconds with error 00:20:36.776 Verification LBA range: start 0x0 length 0x400 00:20:36.776 Nvme7n1 : 0.89 147.82 9.24 71.67 0.00 252634.10 20971.52 270299.59 00:20:36.776 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.776 Job: Nvme8n1 ended in about 0.92 seconds with error 00:20:36.776 Verification LBA range: start 0x0 length 0x400 00:20:36.776 Nvme8n1 : 0.92 146.25 9.14 63.30 0.00 258473.02 16990.81 270299.59 00:20:36.776 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.776 Job: Nvme9n1 ended in about 0.92 seconds with error 00:20:36.776 Verification LBA range: start 0x0 length 0x400 00:20:36.776 Nvme9n1 : 0.92 139.22 8.70 69.61 0.00 254865.26 20971.52 256318.58 00:20:36.776 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.776 Job: 
Nvme10n1 ended in about 0.90 seconds with error 00:20:36.776 Verification LBA range: start 0x0 length 0x400 00:20:36.776 Nvme10n1 : 0.90 142.80 8.92 71.40 0.00 241338.85 21845.33 290494.39 00:20:36.776 =================================================================================================================== 00:20:36.776 Total : 1659.72 103.73 707.70 0.00 243217.23 4053.52 290494.39 00:20:36.776 [2024-07-15 10:33:31.281220] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:36.776 [2024-07-15 10:33:31.281313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:36.776 [2024-07-15 10:33:31.281678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.776 [2024-07-15 10:33:31.281717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x111ebd0 with addr=10.0.0.2, port=4420 00:20:36.776 [2024-07-15 10:33:31.281739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x111ebd0 is same with the state(5) to be set 00:20:36.776 [2024-07-15 10:33:31.281922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.776 [2024-07-15 10:33:31.281950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e4d30 with addr=10.0.0.2, port=4420 00:20:36.776 [2024-07-15 10:33:31.281966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e4d30 is same with the state(5) to be set 00:20:36.776 [2024-07-15 10:33:31.282115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.776 [2024-07-15 10:33:31.282140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107ab30 with addr=10.0.0.2, port=4420 00:20:36.776 [2024-07-15 10:33:31.282156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107ab30 is same with the state(5) to be set 00:20:36.776 [2024-07-15 10:33:31.282279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.776 [2024-07-15 10:33:31.282304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1056830 with addr=10.0.0.2, port=4420 00:20:36.776 [2024-07-15 10:33:31.282320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1056830 is same with the state(5) to be set 00:20:36.777 [2024-07-15 10:33:31.282336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:36.777 [2024-07-15 10:33:31.282349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:36.777 [2024-07-15 10:33:31.282366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:36.777 [2024-07-15 10:33:31.282395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:36.777 [2024-07-15 10:33:31.282409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:36.777 [2024-07-15 10:33:31.282422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:36.777 [2024-07-15 10:33:31.282455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:36.777 [2024-07-15 10:33:31.282469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:36.777 [2024-07-15 10:33:31.282483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:36.777 [2024-07-15 10:33:31.282500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:36.777 [2024-07-15 10:33:31.282513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:36.777 [2024-07-15 10:33:31.282526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:36.777 [2024-07-15 10:33:31.282711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:36.777 [2024-07-15 10:33:31.282735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:36.777 [2024-07-15 10:33:31.282747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:36.777 [2024-07-15 10:33:31.282758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:36.777 [2024-07-15 10:33:31.282919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.777 [2024-07-15 10:33:31.282946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1217240 with addr=10.0.0.2, port=4420 00:20:36.777 [2024-07-15 10:33:31.282962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1217240 is same with the state(5) to be set 00:20:36.777 [2024-07-15 10:33:31.283069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.777 [2024-07-15 10:33:31.283095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1216730 with addr=10.0.0.2, port=4420 00:20:36.777 [2024-07-15 10:33:31.283110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1216730 is same with the state(5) to be set 00:20:36.777 [2024-07-15 10:33:31.283136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x111ebd0 (9): Bad file descriptor 00:20:36.777 [2024-07-15 10:33:31.283159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e4d30 (9): Bad file descriptor 00:20:36.777 [2024-07-15 10:33:31.283177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107ab30 (9): Bad file descriptor 00:20:36.777 [2024-07-15 10:33:31.283194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1056830 (9): Bad file descriptor 00:20:36.777 [2024-07-15 10:33:31.283249] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:36.777 [2024-07-15 10:33:31.283278] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:36.777 [2024-07-15 10:33:31.283297] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:36.777 [2024-07-15 10:33:31.283314] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:36.777 [2024-07-15 10:33:31.283957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1217240 (9): Bad file descriptor 00:20:36.777 [2024-07-15 10:33:31.283987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1216730 (9): Bad file descriptor 00:20:36.777 [2024-07-15 10:33:31.284005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:36.777 [2024-07-15 10:33:31.284017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:36.777 [2024-07-15 10:33:31.284030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:36.777 [2024-07-15 10:33:31.284049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:36.777 [2024-07-15 10:33:31.284068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:36.777 [2024-07-15 10:33:31.284081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:36.777 [2024-07-15 10:33:31.284098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:36.777 [2024-07-15 10:33:31.284111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:36.777 [2024-07-15 10:33:31.284123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:36.777 [2024-07-15 10:33:31.284139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:36.777 [2024-07-15 10:33:31.284152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:36.777 [2024-07-15 10:33:31.284164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:36.777 [2024-07-15 10:33:31.284528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:36.777 [2024-07-15 10:33:31.284557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:36.777 [2024-07-15 10:33:31.284574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:36.777 [2024-07-15 10:33:31.284589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:36.777 [2024-07-15 10:33:31.284605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:36.777 [2024-07-15 10:33:31.284617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:36.777 [2024-07-15 10:33:31.284627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
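The resetting-controller / reinitialization-failed cycles above are the bdev_nvme layer retrying each controller after the target was killed. As an aside, SPDK's bdev_nvme_set_options RPC exposes knobs that would bound a retry storm like this; a minimal sketch with illustrative values (not what this test set), issued before any bdev_nvme_attach_controller:

# Sketch only: give up on a lost controller after 30s, retrying every 5s.
./scripts/rpc.py bdev_nvme_set_options --ctrlr-loss-timeout-sec 30 --reconnect-delay-sec 5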
00:20:36.777 [2024-07-15 10:33:31.284665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:36.777 [2024-07-15 10:33:31.284681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:36.777 [2024-07-15 10:33:31.284693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:36.777 [2024-07-15 10:33:31.284709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:36.777 [2024-07-15 10:33:31.284723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:36.777 [2024-07-15 10:33:31.284735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:36.777 [2024-07-15 10:33:31.284778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:36.777 [2024-07-15 10:33:31.284805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:36.777 [2024-07-15 10:33:31.284820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:36.777 [2024-07-15 10:33:31.284947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.777 [2024-07-15 10:33:31.284974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107ad30 with addr=10.0.0.2, port=4420 00:20:36.777 [2024-07-15 10:33:31.284990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107ad30 is same with the state(5) to be set 00:20:36.777 [2024-07-15 10:33:31.285098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.777 [2024-07-15 10:33:31.285124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1078c60 with addr=10.0.0.2, port=4420 00:20:36.777 [2024-07-15 10:33:31.285139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1078c60 is same with the state(5) to be set 00:20:36.777 [2024-07-15 10:33:31.285246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.777 [2024-07-15 10:33:31.285276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1222240 with addr=10.0.0.2, port=4420 00:20:36.777 [2024-07-15 10:33:31.285294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1222240 is same with the state(5) to be set 00:20:36.777 [2024-07-15 10:33:31.285501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.777 [2024-07-15 10:33:31.285526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1082120 with addr=10.0.0.2, port=4420 00:20:36.777 [2024-07-15 10:33:31.285540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082120 is same with the state(5) to be set 00:20:36.777 [2024-07-15 10:33:31.285582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107ad30 (9): Bad file descriptor 00:20:36.777 [2024-07-15 10:33:31.285607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1078c60 (9): Bad file descriptor 00:20:36.777 [2024-07-15 10:33:31.285626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1222240 (9): 
Bad file descriptor 00:20:36.777 [2024-07-15 10:33:31.285643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1082120 (9): Bad file descriptor 00:20:36.777 [2024-07-15 10:33:31.285682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:36.777 [2024-07-15 10:33:31.285700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:36.777 [2024-07-15 10:33:31.285712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:36.777 [2024-07-15 10:33:31.285729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:36.777 [2024-07-15 10:33:31.285743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:36.777 [2024-07-15 10:33:31.285757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:36.777 [2024-07-15 10:33:31.285772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:36.777 [2024-07-15 10:33:31.285784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:36.777 [2024-07-15 10:33:31.285797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:36.777 [2024-07-15 10:33:31.285811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:36.777 [2024-07-15 10:33:31.285825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:36.777 [2024-07-15 10:33:31.285838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:36.777 [2024-07-15 10:33:31.285874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:36.777 [2024-07-15 10:33:31.285898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:36.777 [2024-07-15 10:33:31.285910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:36.777 [2024-07-15 10:33:31.285923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
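errno 111 in the connect() failures above is ECONNREFUSED: tc3 has already killed the target, so every reconnect to 10.0.0.2:4420 is refused until bdevperf itself is torn down. A quick hedged check one could run from the initiator (host) side to confirm the listener is gone, using bash's /dev/tcp with the address and port taken from the log:

# Probe the NVMe/TCP listen address; refusal here matches the errno = 111 storm above.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "listener still up on 10.0.0.2:4420"
else
    echo "connect failed (refused or timed out) - listener is gone"
fi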
00:20:37.345 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:37.345 10:33:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:38.280 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2356640 00:20:38.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2356640) - No such process 00:20:38.280 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:38.280 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:38.280 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:38.280 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:38.280 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:38.280 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:38.280 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:38.280 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:38.280 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:38.280 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:38.280 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:38.280 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:38.280 rmmod nvme_tcp 00:20:38.280 rmmod nvme_fabrics 00:20:38.280 rmmod nvme_keyring 00:20:38.280 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:38.280 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:38.280 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:38.280 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:38.281 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:38.281 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:38.281 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:38.281 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:38.281 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:38.281 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.281 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:38.281 10:33:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.810 10:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:40.810 00:20:40.810 real 0m8.651s 00:20:40.810 user 0m23.573s 00:20:40.810 sys 0m1.518s 00:20:40.810 
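The kill -9 2356640 / "No such process" / true sequence in the teardown above is deliberate: the target pid was already killed earlier in tc3, and the trailing true after the failed kill keeps the script's set -e from aborting cleanup. A minimal sketch of the idiom, with the state file name taken from the log:

# Teardown sketch: pid may already be gone; never let kill abort cleanup under set -e.
kill -9 "$nvmfpid" 2>/dev/null || true
rm -f ./local-job0-0-verify.state   # stale bdevperf job state from the aborted run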
10:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:40.810 10:33:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:40.810 ************************************ 00:20:40.810 END TEST nvmf_shutdown_tc3 00:20:40.810 ************************************ 00:20:40.810 10:33:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:40.810 10:33:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:40.810 00:20:40.810 real 0m28.247s 00:20:40.810 user 1m20.627s 00:20:40.810 sys 0m6.230s 00:20:40.810 10:33:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:40.810 10:33:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:40.810 ************************************ 00:20:40.810 END TEST nvmf_shutdown 00:20:40.810 ************************************ 00:20:40.810 10:33:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:40.810 10:33:34 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:40.810 10:33:34 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:40.810 10:33:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:40.810 10:33:34 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:40.810 10:33:34 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:40.810 10:33:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:40.810 10:33:35 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:40.810 10:33:35 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:40.810 10:33:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:40.810 10:33:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:40.810 10:33:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:40.810 ************************************ 00:20:40.810 START TEST nvmf_multicontroller 00:20:40.810 ************************************ 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:40.810 * Looking for test storage... 
00:20:40.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:40.810 10:33:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:40.811 10:33:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:40.811 10:33:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:40.811 10:33:35 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:40.811 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.811 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:40.811 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:40.811 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:40.811 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.811 10:33:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.811 10:33:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.811 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:40.811 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:40.811 10:33:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:40.811 10:33:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.712 10:33:37 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:42.712 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:42.712 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:42.712 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:42.712 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.712 10:33:37 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:42.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:20:42.712 00:20:42.712 --- 10.0.0.2 ping statistics --- 00:20:42.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.712 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:42.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:20:42.712 00:20:42.712 --- 10.0.0.1 ping statistics --- 00:20:42.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.712 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2359158 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2359158 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2359158 ']' 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:42.712 10:33:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.713 10:33:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:42.713 10:33:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:42.713 [2024-07-15 10:33:37.311493] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:42.713 [2024-07-15 10:33:37.311575] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.713 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.970 [2024-07-15 10:33:37.379823] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:42.970 [2024-07-15 10:33:37.495458] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.970 [2024-07-15 10:33:37.495521] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.970 [2024-07-15 10:33:37.495537] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.970 [2024-07-15 10:33:37.495559] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.970 [2024-07-15 10:33:37.495572] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
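The waitforlisten 2359158 above blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock. A hedged sketch of that poll loop (the real helper in autotest_common.sh does more bookkeeping; max_retries=100 is visible in the trace):

# Sketch: poll the app's RPC socket until it accepts a trivial call, bailing if the app dies.
for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "app exited before listening"; exit 1; }
    sleep 0.1
done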
00:20:42.970 [2024-07-15 10:33:37.495691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.970 [2024-07-15 10:33:37.495798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:42.970 [2024-07-15 10:33:37.495801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:43.904 [2024-07-15 10:33:38.269839] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:43.904 Malloc0 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:43.904 [2024-07-15 10:33:38.337095] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.904 
10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:43.904 [2024-07-15 10:33:38.344955] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:43.904 Malloc1 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.904 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:43.905 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.905 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2359320 00:20:43.905 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:43.905 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2359320 /var/tmp/bdevperf.sock 00:20:43.905 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2359320 ']' 00:20:43.905 10:33:38 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:43.905 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.905 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.905 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.905 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.905 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.162 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.162 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:44.162 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:44.162 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.162 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.419 NVMe0n1 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.419 1 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.419 request: 00:20:44.419 { 00:20:44.419 "name": "NVMe0", 00:20:44.419 "trtype": "tcp", 00:20:44.419 "traddr": "10.0.0.2", 00:20:44.419 "adrfam": "ipv4", 00:20:44.419 "trsvcid": "4420", 00:20:44.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.419 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:44.419 "hostaddr": "10.0.0.2", 00:20:44.419 "hostsvcid": "60000", 00:20:44.419 "prchk_reftag": false, 00:20:44.419 "prchk_guard": false, 00:20:44.419 "hdgst": false, 00:20:44.419 "ddgst": false, 00:20:44.419 "method": "bdev_nvme_attach_controller", 00:20:44.419 "req_id": 1 00:20:44.419 } 00:20:44.419 Got JSON-RPC error response 00:20:44.419 response: 00:20:44.419 { 00:20:44.419 "code": -114, 00:20:44.419 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:44.419 } 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:44.419 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:44.420 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:44.420 10:33:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:44.420 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:44.420 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:44.420 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:44.420 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:44.420 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:44.420 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:44.420 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:44.420 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.420 10:33:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.420 request: 00:20:44.420 { 00:20:44.420 "name": "NVMe0", 00:20:44.420 "trtype": "tcp", 00:20:44.420 "traddr": "10.0.0.2", 00:20:44.420 "adrfam": "ipv4", 00:20:44.420 "trsvcid": "4420", 00:20:44.420 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:44.420 "hostaddr": "10.0.0.2", 00:20:44.420 "hostsvcid": "60000", 00:20:44.420 "prchk_reftag": false, 00:20:44.420 "prchk_guard": false, 
00:20:44.420 "hdgst": false, 00:20:44.420 "ddgst": false, 00:20:44.420 "method": "bdev_nvme_attach_controller", 00:20:44.420 "req_id": 1 00:20:44.420 } 00:20:44.420 Got JSON-RPC error response 00:20:44.420 response: 00:20:44.420 { 00:20:44.420 "code": -114, 00:20:44.420 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:44.420 } 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.420 request: 00:20:44.420 { 00:20:44.420 "name": "NVMe0", 00:20:44.420 "trtype": "tcp", 00:20:44.420 "traddr": "10.0.0.2", 00:20:44.420 "adrfam": "ipv4", 00:20:44.420 "trsvcid": "4420", 00:20:44.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.420 "hostaddr": "10.0.0.2", 00:20:44.420 "hostsvcid": "60000", 00:20:44.420 "prchk_reftag": false, 00:20:44.420 "prchk_guard": false, 00:20:44.420 "hdgst": false, 00:20:44.420 "ddgst": false, 00:20:44.420 "multipath": "disable", 00:20:44.420 "method": "bdev_nvme_attach_controller", 00:20:44.420 "req_id": 1 00:20:44.420 } 00:20:44.420 Got JSON-RPC error response 00:20:44.420 response: 00:20:44.420 { 00:20:44.420 "code": -114, 00:20:44.420 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:44.420 } 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:44.420 10:33:39 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.420 request: 00:20:44.420 { 00:20:44.420 "name": "NVMe0", 00:20:44.420 "trtype": "tcp", 00:20:44.420 "traddr": "10.0.0.2", 00:20:44.420 "adrfam": "ipv4", 00:20:44.420 "trsvcid": "4420", 00:20:44.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.420 "hostaddr": "10.0.0.2", 00:20:44.420 "hostsvcid": "60000", 00:20:44.420 "prchk_reftag": false, 00:20:44.420 "prchk_guard": false, 00:20:44.420 "hdgst": false, 00:20:44.420 "ddgst": false, 00:20:44.420 "multipath": "failover", 00:20:44.420 "method": "bdev_nvme_attach_controller", 00:20:44.420 "req_id": 1 00:20:44.420 } 00:20:44.420 Got JSON-RPC error response 00:20:44.420 response: 00:20:44.420 { 00:20:44.420 "code": -114, 00:20:44.420 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:44.420 } 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.420 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.678 00:20:44.678 10:33:39 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.678 10:33:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:44.678 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.678 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.678 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.678 10:33:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:44.678 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.678 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.678 00:20:44.678 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.678 10:33:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:44.678 10:33:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:44.678 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.678 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:44.678 10:33:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.678 10:33:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:44.678 10:33:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:46.051 0 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2359320 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2359320 ']' 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2359320 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2359320 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2359320' 00:20:46.051 killing process with pid 2359320 00:20:46.051 10:33:40 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2359320 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2359320 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:46.051 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:46.051 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:46.051 [2024-07-15 10:33:38.449035] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:46.051 [2024-07-15 10:33:38.449119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359320 ] 00:20:46.051 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.051 [2024-07-15 10:33:38.510786] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.051 [2024-07-15 10:33:38.619709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.051 [2024-07-15 10:33:39.206188] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 552c8e66-6fd2-4081-96ba-11ae82b33fc4 already exists 00:20:46.051 [2024-07-15 10:33:39.206226] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:552c8e66-6fd2-4081-96ba-11ae82b33fc4 alias for bdev NVMe1n1 00:20:46.051 [2024-07-15 10:33:39.206256] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:46.051 Running I/O for 1 seconds... 
00:20:46.051 
00:20:46.052                                                                                                  Latency(us)
00:20:46.052 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:46.052 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:20:46.052      NVMe0n1             :       1.00   17956.79      70.14       0.00       0.00    7116.71    3325.35   12815.93
00:20:46.052 ===================================================================================================================
00:20:46.052 Total                     :            17956.79      70.14       0.00       0.00    7116.71    3325.35   12815.93
00:20:46.052 Received shutdown signal, test time was about 1.000000 seconds
00:20:46.052 
00:20:46.052                                                                                                  Latency(us)
00:20:46.052 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:46.052 ===================================================================================================================
00:20:46.052 Total                     :                0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:20:46.052 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:20:46.052 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:20:46.052 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:20:46.052 10:33:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:20:46.052 10:33:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:46.052 10:33:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:20:46.052 10:33:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:46.052 10:33:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:20:46.052 10:33:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:46.052 10:33:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:46.052 rmmod nvme_tcp
00:20:46.310 rmmod nvme_fabrics
00:20:46.310 rmmod nvme_keyring
00:20:46.310 10:33:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:46.310 10:33:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:20:46.310 10:33:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:20:46.310 10:33:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2359158 ']'
00:20:46.310 10:33:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2359158
00:20:46.310 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2359158 ']'
00:20:46.310 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2359158
00:20:46.310 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname
00:20:46.310 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:46.310 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2359158
00:20:46.310 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:20:46.310 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:20:46.310 10:33:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2359158'
killing process with pid 2359158
10:33:40
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2359158 00:20:46.569 10:33:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:46.569 10:33:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:46.569 10:33:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:46.569 10:33:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:46.569 10:33:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:46.569 10:33:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.569 10:33:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.569 10:33:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.140 10:33:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:49.140 00:20:49.140 real 0m8.104s 00:20:49.140 user 0m13.797s 00:20:49.140 sys 0m2.317s 00:20:49.140 10:33:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:49.140 10:33:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.140 ************************************ 00:20:49.140 END TEST nvmf_multicontroller 00:20:49.140 ************************************ 00:20:49.140 10:33:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:49.140 10:33:43 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:49.140 10:33:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:49.140 10:33:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:49.140 10:33:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:49.140 ************************************ 00:20:49.140 START TEST nvmf_aer 00:20:49.140 ************************************ 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:49.140 * Looking for test storage... 
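The duplicate-attach checks from the nvmf_multicontroller run that just ended can be reproduced by hand against bdevperf's RPC socket. A minimal sketch, assuming SPDK's stock scripts/rpc.py is invoked directly in place of the harness's rpc_cmd wrapper; the socket path and every argument are taken verbatim from the trace above:

  # First attach registers the controller name NVMe0 on the 4420 path.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # Re-attaching the same name over the same network path, whether with
  # -x disable or -x failover, is expected to fail with JSON-RPC error -114,
  # exactly as captured in the error responses above; only an attach to a
  # different path (port 4421) succeeds.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover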
00:20:49.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:49.140 10:33:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:50.512 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:20:50.512 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:50.512 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:50.512 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:50.512 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.513 
10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.513 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:50.513 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.513 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.513 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:50.513 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:50.513 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:50.513 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:50.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:20:50.771 00:20:50.771 --- 10.0.0.2 ping statistics --- 00:20:50.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.771 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:50.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:50.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:20:50.771 00:20:50.771 --- 10.0.0.1 ping statistics --- 00:20:50.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.771 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2361527 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2361527 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2361527 ']' 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.771 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:50.771 [2024-07-15 10:33:45.323781] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:50.771 [2024-07-15 10:33:45.323847] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.771 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.771 [2024-07-15 10:33:45.394034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:51.029 [2024-07-15 10:33:45.516560] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.029 [2024-07-15 10:33:45.516622] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:51.029 [2024-07-15 10:33:45.516636] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.029 [2024-07-15 10:33:45.516664] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.029 [2024-07-15 10:33:45.516675] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.029 [2024-07-15 10:33:45.516737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.029 [2024-07-15 10:33:45.516790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.029 [2024-07-15 10:33:45.516908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:51.029 [2024-07-15 10:33:45.516912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.029 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.029 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:51.029 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:51.029 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:51.029 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:51.029 10:33:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.029 10:33:45 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:51.029 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.029 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:51.287 [2024-07-15 10:33:45.680815] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:51.287 Malloc0 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:51.287 [2024-07-15 10:33:45.734574] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:51.287 [ 00:20:51.287 { 00:20:51.287 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:51.287 "subtype": "Discovery", 00:20:51.287 "listen_addresses": [], 00:20:51.287 "allow_any_host": true, 00:20:51.287 "hosts": [] 00:20:51.287 }, 00:20:51.287 { 00:20:51.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.287 "subtype": "NVMe", 00:20:51.287 "listen_addresses": [ 00:20:51.287 { 00:20:51.287 "trtype": "TCP", 00:20:51.287 "adrfam": "IPv4", 00:20:51.287 "traddr": "10.0.0.2", 00:20:51.287 "trsvcid": "4420" 00:20:51.287 } 00:20:51.287 ], 00:20:51.287 "allow_any_host": true, 00:20:51.287 "hosts": [], 00:20:51.287 "serial_number": "SPDK00000000000001", 00:20:51.287 "model_number": "SPDK bdev Controller", 00:20:51.287 "max_namespaces": 2, 00:20:51.287 "min_cntlid": 1, 00:20:51.287 "max_cntlid": 65519, 00:20:51.287 "namespaces": [ 00:20:51.287 { 00:20:51.287 "nsid": 1, 00:20:51.287 "bdev_name": "Malloc0", 00:20:51.287 "name": "Malloc0", 00:20:51.287 "nguid": "96237B3506E6497E8538B74293958166", 00:20:51.287 "uuid": "96237b35-06e6-497e-8538-b74293958166" 00:20:51.287 } 00:20:51.287 ] 00:20:51.287 } 00:20:51.287 ] 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2361573 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:51.287 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:51.287 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:51.546 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:51.546 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:20:51.546 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:20:51.546 10:33:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:51.546 Malloc1 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:51.546 [ 00:20:51.546 { 00:20:51.546 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:51.546 "subtype": "Discovery", 00:20:51.546 "listen_addresses": [], 00:20:51.546 "allow_any_host": true, 00:20:51.546 "hosts": [] 00:20:51.546 }, 00:20:51.546 { 00:20:51.546 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.546 "subtype": "NVMe", 00:20:51.546 "listen_addresses": [ 00:20:51.546 { 00:20:51.546 "trtype": "TCP", 00:20:51.546 "adrfam": "IPv4", 00:20:51.546 "traddr": "10.0.0.2", 00:20:51.546 "trsvcid": "4420" 00:20:51.546 } 00:20:51.546 ], 00:20:51.546 "allow_any_host": true, 00:20:51.546 "hosts": [], 00:20:51.546 "serial_number": "SPDK00000000000001", 00:20:51.546 "model_number": "SPDK bdev Controller", 00:20:51.546 "max_namespaces": 2, 00:20:51.546 "min_cntlid": 1, 00:20:51.546 "max_cntlid": 65519, 00:20:51.546 "namespaces": [ 00:20:51.546 { 00:20:51.546 "nsid": 1, 00:20:51.546 "bdev_name": "Malloc0", 00:20:51.546 "name": "Malloc0", 00:20:51.546 "nguid": "96237B3506E6497E8538B74293958166", 00:20:51.546 "uuid": "96237b35-06e6-497e-8538-b74293958166" 00:20:51.546 }, 00:20:51.546 { 00:20:51.546 "nsid": 2, 00:20:51.546 "bdev_name": "Malloc1", 00:20:51.546 "name": "Malloc1", 00:20:51.546 "nguid": "B84775E02C8345EA880B24F8D3C1360B", 00:20:51.546 "uuid": "b84775e0-2c83-45ea-880b-24f8d3c1360b" 00:20:51.546 } 00:20:51.546 ] 00:20:51.546 } 00:20:51.546 ] 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2361573 00:20:51.546 Asynchronous Event Request test 00:20:51.546 Attaching to 10.0.0.2 00:20:51.546 Attached to 10.0.0.2 00:20:51.546 Registering asynchronous event callbacks... 00:20:51.546 Starting namespace attribute notice tests for all controllers... 
00:20:51.546 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:51.546 aer_cb - Changed Namespace 00:20:51.546 Cleaning up... 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.546 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:51.805 rmmod nvme_tcp 00:20:51.805 rmmod nvme_fabrics 00:20:51.805 rmmod nvme_keyring 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2361527 ']' 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2361527 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2361527 ']' 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2361527 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2361527 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2361527' 00:20:51.805 killing process with pid 2361527 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2361527 00:20:51.805 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2361527 00:20:52.063 10:33:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == 
iso ']' 00:20:52.063 10:33:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:52.063 10:33:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:52.063 10:33:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:52.063 10:33:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:52.063 10:33:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.063 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.063 10:33:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.964 10:33:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:53.964 00:20:53.964 real 0m5.395s 00:20:53.964 user 0m4.593s 00:20:53.964 sys 0m1.845s 00:20:53.964 10:33:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:53.964 10:33:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:53.964 ************************************ 00:20:53.964 END TEST nvmf_aer 00:20:53.964 ************************************ 00:20:53.964 10:33:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:53.964 10:33:48 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:53.964 10:33:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:53.964 10:33:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:53.964 10:33:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:54.221 ************************************ 00:20:54.221 START TEST nvmf_async_init 00:20:54.221 ************************************ 00:20:54.221 10:33:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:54.221 * Looking for test storage... 
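The nvmf_aer exercise that just finished boils down to four steps: start the AER listener, add a second namespace, let the namespace-change AEN fire, and wait for the listener's touch file. A condensed sketch with the tool paths and arguments taken from the trace above; $SPDK stands in for the repo checkout, and rpc.py for the harness's rpc_cmd wrapper:

  # Start the AER listener; its aer_cb touches the file once it sees the
  # namespace-change notice (-n 2 waits for nsid 2 to appear).
  $SPDK/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  # Adding a second namespace to the subsystem triggers the AEN.
  $SPDK/scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  # Simplified form of the harness's waitforfile polling loop.
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done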
00:20:54.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:54.221 10:33:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:54.221 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:54.221 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.221 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.221 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.221 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.221 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.221 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.221 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=241d241d127f48c9b3cf5b1bec49efca 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:54.222 10:33:48 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:54.222 10:33:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:56.121 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:56.121 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:56.121 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:56.122 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:56.122 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:56.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:56.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:20:56.122 00:20:56.122 --- 10.0.0.2 ping statistics --- 00:20:56.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.122 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:20:56.122 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:56.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:56.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:20:56.380 00:20:56.380 --- 10.0.0.1 ping statistics --- 00:20:56.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.380 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2363614 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2363614 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2363614 ']' 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:56.380 10:33:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:56.380 [2024-07-15 10:33:50.852052] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:56.380 [2024-07-15 10:33:50.852129] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.380 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.380 [2024-07-15 10:33:50.921085] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.637 [2024-07-15 10:33:51.035761] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.637 [2024-07-15 10:33:51.035815] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.637 [2024-07-15 10:33:51.035831] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.637 [2024-07-15 10:33:51.035844] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.637 [2024-07-15 10:33:51.035855] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.637 [2024-07-15 10:33:51.035896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.203 [2024-07-15 10:33:51.799375] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.203 null0 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.203 10:33:51 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 241d241d127f48c9b3cf5b1bec49efca 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.203 [2024-07-15 10:33:51.839602] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.203 10:33:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.461 nvme0n1 00:20:57.461 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.461 10:33:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:57.461 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.461 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.461 [ 00:20:57.461 { 00:20:57.461 "name": "nvme0n1", 00:20:57.461 "aliases": [ 00:20:57.461 "241d241d-127f-48c9-b3cf-5b1bec49efca" 00:20:57.461 ], 00:20:57.461 "product_name": "NVMe disk", 00:20:57.461 "block_size": 512, 00:20:57.461 "num_blocks": 2097152, 00:20:57.461 "uuid": "241d241d-127f-48c9-b3cf-5b1bec49efca", 00:20:57.461 "assigned_rate_limits": { 00:20:57.461 "rw_ios_per_sec": 0, 00:20:57.461 "rw_mbytes_per_sec": 0, 00:20:57.461 "r_mbytes_per_sec": 0, 00:20:57.461 "w_mbytes_per_sec": 0 00:20:57.461 }, 00:20:57.461 "claimed": false, 00:20:57.461 "zoned": false, 00:20:57.461 "supported_io_types": { 00:20:57.461 "read": true, 00:20:57.461 "write": true, 00:20:57.461 "unmap": false, 00:20:57.461 "flush": true, 00:20:57.461 "reset": true, 00:20:57.461 "nvme_admin": true, 00:20:57.461 "nvme_io": true, 00:20:57.461 "nvme_io_md": false, 00:20:57.461 "write_zeroes": true, 00:20:57.461 "zcopy": false, 00:20:57.461 "get_zone_info": false, 00:20:57.461 "zone_management": false, 00:20:57.461 "zone_append": false, 00:20:57.461 "compare": true, 00:20:57.461 "compare_and_write": true, 00:20:57.461 "abort": true, 00:20:57.461 "seek_hole": false, 00:20:57.461 "seek_data": false, 00:20:57.461 "copy": true, 00:20:57.461 "nvme_iov_md": false 00:20:57.461 }, 00:20:57.461 "memory_domains": [ 00:20:57.461 { 00:20:57.461 "dma_device_id": "system", 00:20:57.461 "dma_device_type": 1 00:20:57.461 } 00:20:57.461 ], 00:20:57.461 "driver_specific": { 00:20:57.461 "nvme": [ 00:20:57.461 { 00:20:57.461 "trid": { 00:20:57.461 "trtype": "TCP", 00:20:57.461 "adrfam": "IPv4", 00:20:57.461 "traddr": "10.0.0.2", 
00:20:57.461 "trsvcid": "4420", 00:20:57.461 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:57.461 }, 00:20:57.461 "ctrlr_data": { 00:20:57.461 "cntlid": 1, 00:20:57.461 "vendor_id": "0x8086", 00:20:57.461 "model_number": "SPDK bdev Controller", 00:20:57.461 "serial_number": "00000000000000000000", 00:20:57.461 "firmware_revision": "24.09", 00:20:57.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:57.461 "oacs": { 00:20:57.461 "security": 0, 00:20:57.461 "format": 0, 00:20:57.461 "firmware": 0, 00:20:57.461 "ns_manage": 0 00:20:57.461 }, 00:20:57.461 "multi_ctrlr": true, 00:20:57.461 "ana_reporting": false 00:20:57.461 }, 00:20:57.461 "vs": { 00:20:57.461 "nvme_version": "1.3" 00:20:57.461 }, 00:20:57.461 "ns_data": { 00:20:57.461 "id": 1, 00:20:57.461 "can_share": true 00:20:57.461 } 00:20:57.461 } 00:20:57.461 ], 00:20:57.461 "mp_policy": "active_passive" 00:20:57.461 } 00:20:57.461 } 00:20:57.461 ] 00:20:57.461 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.461 10:33:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:57.461 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.461 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.461 [2024-07-15 10:33:52.092752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:57.461 [2024-07-15 10:33:52.092842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc02090 (9): Bad file descriptor 00:20:57.718 [2024-07-15 10:33:52.235034] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:57.718 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.718 10:33:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:57.718 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.718 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.718 [ 00:20:57.718 { 00:20:57.718 "name": "nvme0n1", 00:20:57.718 "aliases": [ 00:20:57.718 "241d241d-127f-48c9-b3cf-5b1bec49efca" 00:20:57.718 ], 00:20:57.718 "product_name": "NVMe disk", 00:20:57.718 "block_size": 512, 00:20:57.718 "num_blocks": 2097152, 00:20:57.718 "uuid": "241d241d-127f-48c9-b3cf-5b1bec49efca", 00:20:57.718 "assigned_rate_limits": { 00:20:57.718 "rw_ios_per_sec": 0, 00:20:57.718 "rw_mbytes_per_sec": 0, 00:20:57.718 "r_mbytes_per_sec": 0, 00:20:57.718 "w_mbytes_per_sec": 0 00:20:57.718 }, 00:20:57.718 "claimed": false, 00:20:57.718 "zoned": false, 00:20:57.718 "supported_io_types": { 00:20:57.718 "read": true, 00:20:57.718 "write": true, 00:20:57.718 "unmap": false, 00:20:57.718 "flush": true, 00:20:57.718 "reset": true, 00:20:57.718 "nvme_admin": true, 00:20:57.718 "nvme_io": true, 00:20:57.718 "nvme_io_md": false, 00:20:57.718 "write_zeroes": true, 00:20:57.718 "zcopy": false, 00:20:57.718 "get_zone_info": false, 00:20:57.718 "zone_management": false, 00:20:57.718 "zone_append": false, 00:20:57.718 "compare": true, 00:20:57.718 "compare_and_write": true, 00:20:57.718 "abort": true, 00:20:57.718 "seek_hole": false, 00:20:57.718 "seek_data": false, 00:20:57.718 "copy": true, 00:20:57.718 "nvme_iov_md": false 00:20:57.718 }, 00:20:57.718 "memory_domains": [ 00:20:57.718 { 00:20:57.718 "dma_device_id": "system", 00:20:57.718 "dma_device_type": 1 
00:20:57.718 } 00:20:57.718 ], 00:20:57.718 "driver_specific": { 00:20:57.718 "nvme": [ 00:20:57.718 { 00:20:57.718 "trid": { 00:20:57.718 "trtype": "TCP", 00:20:57.718 "adrfam": "IPv4", 00:20:57.718 "traddr": "10.0.0.2", 00:20:57.718 "trsvcid": "4420", 00:20:57.718 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:57.718 }, 00:20:57.718 "ctrlr_data": { 00:20:57.718 "cntlid": 2, 00:20:57.718 "vendor_id": "0x8086", 00:20:57.718 "model_number": "SPDK bdev Controller", 00:20:57.718 "serial_number": "00000000000000000000", 00:20:57.718 "firmware_revision": "24.09", 00:20:57.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:57.718 "oacs": { 00:20:57.718 "security": 0, 00:20:57.718 "format": 0, 00:20:57.718 "firmware": 0, 00:20:57.718 "ns_manage": 0 00:20:57.718 }, 00:20:57.718 "multi_ctrlr": true, 00:20:57.718 "ana_reporting": false 00:20:57.718 }, 00:20:57.718 "vs": { 00:20:57.718 "nvme_version": "1.3" 00:20:57.718 }, 00:20:57.718 "ns_data": { 00:20:57.718 "id": 1, 00:20:57.718 "can_share": true 00:20:57.718 } 00:20:57.718 } 00:20:57.718 ], 00:20:57.718 "mp_policy": "active_passive" 00:20:57.718 } 00:20:57.718 } 00:20:57.718 ] 00:20:57.718 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.718 10:33:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.718 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.mqpeaLszUU 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.mqpeaLszUU 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.719 [2024-07-15 10:33:52.285411] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:57.719 [2024-07-15 10:33:52.285538] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mqpeaLszUU 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.719 [2024-07-15 10:33:52.293436] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mqpeaLszUU 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.719 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.719 [2024-07-15 10:33:52.301464] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:57.719 [2024-07-15 10:33:52.301522] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:57.976 nvme0n1 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.976 [ 00:20:57.976 { 00:20:57.976 "name": "nvme0n1", 00:20:57.976 "aliases": [ 00:20:57.976 "241d241d-127f-48c9-b3cf-5b1bec49efca" 00:20:57.976 ], 00:20:57.976 "product_name": "NVMe disk", 00:20:57.976 "block_size": 512, 00:20:57.976 "num_blocks": 2097152, 00:20:57.976 "uuid": "241d241d-127f-48c9-b3cf-5b1bec49efca", 00:20:57.976 "assigned_rate_limits": { 00:20:57.976 "rw_ios_per_sec": 0, 00:20:57.976 "rw_mbytes_per_sec": 0, 00:20:57.976 "r_mbytes_per_sec": 0, 00:20:57.976 "w_mbytes_per_sec": 0 00:20:57.976 }, 00:20:57.976 "claimed": false, 00:20:57.976 "zoned": false, 00:20:57.976 "supported_io_types": { 00:20:57.976 "read": true, 00:20:57.976 "write": true, 00:20:57.976 "unmap": false, 00:20:57.976 "flush": true, 00:20:57.976 "reset": true, 00:20:57.976 "nvme_admin": true, 00:20:57.976 "nvme_io": true, 00:20:57.976 "nvme_io_md": false, 00:20:57.976 "write_zeroes": true, 00:20:57.976 "zcopy": false, 00:20:57.976 "get_zone_info": false, 00:20:57.976 "zone_management": false, 00:20:57.976 "zone_append": false, 00:20:57.976 "compare": true, 00:20:57.976 "compare_and_write": true, 00:20:57.976 "abort": true, 00:20:57.976 "seek_hole": false, 00:20:57.976 "seek_data": false, 00:20:57.976 "copy": true, 00:20:57.976 "nvme_iov_md": false 00:20:57.976 }, 00:20:57.976 "memory_domains": [ 00:20:57.976 { 00:20:57.976 "dma_device_id": "system", 00:20:57.976 "dma_device_type": 1 00:20:57.976 } 00:20:57.976 ], 00:20:57.976 "driver_specific": { 00:20:57.976 "nvme": [ 00:20:57.976 { 00:20:57.976 "trid": { 00:20:57.976 "trtype": "TCP", 00:20:57.976 "adrfam": "IPv4", 00:20:57.976 "traddr": "10.0.0.2", 00:20:57.976 "trsvcid": "4421", 00:20:57.976 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:57.976 }, 00:20:57.976 "ctrlr_data": { 00:20:57.976 "cntlid": 3, 00:20:57.976 "vendor_id": "0x8086", 00:20:57.976 "model_number": "SPDK bdev Controller", 00:20:57.976 "serial_number": "00000000000000000000", 00:20:57.976 "firmware_revision": "24.09", 00:20:57.976 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:20:57.976 "oacs": { 00:20:57.976 "security": 0, 00:20:57.976 "format": 0, 00:20:57.976 "firmware": 0, 00:20:57.976 "ns_manage": 0 00:20:57.976 }, 00:20:57.976 "multi_ctrlr": true, 00:20:57.976 "ana_reporting": false 00:20:57.976 }, 00:20:57.976 "vs": { 00:20:57.976 "nvme_version": "1.3" 00:20:57.976 }, 00:20:57.976 "ns_data": { 00:20:57.976 "id": 1, 00:20:57.976 "can_share": true 00:20:57.976 } 00:20:57.976 } 00:20:57.976 ], 00:20:57.976 "mp_policy": "active_passive" 00:20:57.976 } 00:20:57.976 } 00:20:57.976 ] 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.mqpeaLszUU 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:57.976 rmmod nvme_tcp 00:20:57.976 rmmod nvme_fabrics 00:20:57.976 rmmod nvme_keyring 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2363614 ']' 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2363614 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2363614 ']' 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2363614 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2363614 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2363614' 00:20:57.976 killing process with pid 2363614 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2363614 00:20:57.976 [2024-07-15 10:33:52.506037] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:20:57.976 [2024-07-15 10:33:52.506075] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:57.976 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2363614 00:20:58.235 10:33:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:58.235 10:33:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:58.235 10:33:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:58.235 10:33:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:58.235 10:33:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:58.235 10:33:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.235 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.235 10:33:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.767 10:33:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:00.767 00:21:00.767 real 0m6.168s 00:21:00.767 user 0m2.926s 00:21:00.767 sys 0m1.837s 00:21:00.767 10:33:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:00.767 10:33:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:00.767 ************************************ 00:21:00.767 END TEST nvmf_async_init 00:21:00.767 ************************************ 00:21:00.767 10:33:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:00.767 10:33:54 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:00.767 10:33:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:00.767 10:33:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:00.767 10:33:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:00.767 ************************************ 00:21:00.767 START TEST dma 00:21:00.767 ************************************ 00:21:00.767 10:33:54 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:00.767 * Looking for test storage... 
00:21:00.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:00.767 10:33:54 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.767 10:33:54 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.767 10:33:54 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.767 10:33:54 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.767 10:33:54 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.767 10:33:54 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.767 10:33:54 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.767 10:33:54 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:21:00.767 10:33:54 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:00.767 10:33:54 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:00.767 10:33:54 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:00.767 10:33:54 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:21:00.767 00:21:00.767 real 0m0.071s 00:21:00.767 user 0m0.033s 00:21:00.767 sys 0m0.043s 00:21:00.767 10:33:54 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:00.767 10:33:54 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:21:00.767 ************************************ 00:21:00.767 END TEST dma 00:21:00.767 ************************************ 00:21:00.767 10:33:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:00.767 10:33:54 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:00.767 10:33:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:00.768 10:33:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:00.768 10:33:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:00.768 ************************************ 00:21:00.768 START TEST nvmf_identify 00:21:00.768 ************************************ 00:21:00.768 10:33:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:00.768 * Looking for test storage... 
00:21:00.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:00.768 10:33:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:02.670 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:02.670 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.670 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:02.671 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:02.671 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.671 10:33:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:02.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:21:02.671 00:21:02.671 --- 10.0.0.2 ping statistics --- 00:21:02.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.671 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:02.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:21:02.671 00:21:02.671 --- 10.0.0.1 ping statistics --- 00:21:02.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.671 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2365757 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2365757 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2365757 ']' 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.671 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:02.671 [2024-07-15 10:33:57.174252] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:02.671 [2024-07-15 10:33:57.174358] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.671 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.671 [2024-07-15 10:33:57.244955] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.929 [2024-07-15 10:33:57.356290] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:02.929 [2024-07-15 10:33:57.356343] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.929 [2024-07-15 10:33:57.356367] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.929 [2024-07-15 10:33:57.356377] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.929 [2024-07-15 10:33:57.356387] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.929 [2024-07-15 10:33:57.356444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.929 [2024-07-15 10:33:57.356504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.929 [2024-07-15 10:33:57.356567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.929 [2024-07-15 10:33:57.356570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:02.929 [2024-07-15 10:33:57.484454] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:02.929 Malloc0 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:02.929 [2024-07-15 10:33:57.561474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.929 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:03.188 [ 00:21:03.188 { 00:21:03.188 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:03.188 "subtype": "Discovery", 00:21:03.188 "listen_addresses": [ 00:21:03.188 { 00:21:03.188 "trtype": "TCP", 00:21:03.188 "adrfam": "IPv4", 00:21:03.188 "traddr": "10.0.0.2", 00:21:03.188 "trsvcid": "4420" 00:21:03.188 } 00:21:03.188 ], 00:21:03.188 "allow_any_host": true, 00:21:03.188 "hosts": [] 00:21:03.188 }, 00:21:03.188 { 00:21:03.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.188 "subtype": "NVMe", 00:21:03.188 "listen_addresses": [ 00:21:03.188 { 00:21:03.188 "trtype": "TCP", 00:21:03.188 "adrfam": "IPv4", 00:21:03.189 "traddr": "10.0.0.2", 00:21:03.189 "trsvcid": "4420" 00:21:03.189 } 00:21:03.189 ], 00:21:03.189 "allow_any_host": true, 00:21:03.189 "hosts": [], 00:21:03.189 "serial_number": "SPDK00000000000001", 00:21:03.189 "model_number": "SPDK bdev Controller", 00:21:03.189 "max_namespaces": 32, 00:21:03.189 "min_cntlid": 1, 00:21:03.189 "max_cntlid": 65519, 00:21:03.189 "namespaces": [ 00:21:03.189 { 00:21:03.189 "nsid": 1, 00:21:03.189 "bdev_name": "Malloc0", 00:21:03.189 "name": "Malloc0", 00:21:03.189 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:03.189 "eui64": "ABCDEF0123456789", 00:21:03.189 "uuid": "cb0ef4d3-98e1-4a06-9b77-476d004af7e9" 00:21:03.189 } 00:21:03.189 ] 00:21:03.189 } 00:21:03.189 ] 00:21:03.189 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.189 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:03.189 [2024-07-15 10:33:57.604026] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
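
[editor's note] Everything the harness did above can be replayed by hand. The sketch below condenses the namespace wiring and the RPC provisioning sequence from this trace into one standalone script. It is illustrative, not the harness itself: scripts/rpc.py and nvmf_tgt ship with SPDK, while the cvl_0_0/cvl_0_1 interface names, the 10.0.0.x addresses, and the relative paths are simply this job's conventions. Note that rpc.py reaches nvmf_tgt over the UNIX socket /var/tmp/spdk.sock, which stays visible even though the target process runs inside the network namespace.

  #!/usr/bin/env bash
  # Minimal replay of the setup traced above (sketch; NIC names and paths assumed).
  set -e
  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                        # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  modprobe nvme-tcp

  ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  sleep 2                                              # crude stand-in for the harness's waitforlisten

  # Same RPC sequence host/identify.sh issued above:
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems                 # should print the JSON shown above

From here, the spdk_nvme_identify invocation shown in the trace points at the discovery subsystem and produces the connect/identify debug stream and the dump that follow.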
00:21:03.189 [2024-07-15 10:33:57.604069] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365892 ] 00:21:03.189 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.189 [2024-07-15 10:33:57.639315] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:03.189 [2024-07-15 10:33:57.639382] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:03.189 [2024-07-15 10:33:57.639392] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:03.189 [2024-07-15 10:33:57.639409] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:03.189 [2024-07-15 10:33:57.639420] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:03.189 [2024-07-15 10:33:57.639766] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:03.189 [2024-07-15 10:33:57.639822] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2247540 0 00:21:03.189 [2024-07-15 10:33:57.645907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:03.189 [2024-07-15 10:33:57.645928] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:03.189 [2024-07-15 10:33:57.645936] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:03.189 [2024-07-15 10:33:57.645942] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:03.189 [2024-07-15 10:33:57.646015] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.646031] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.646039] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247540) 00:21:03.189 [2024-07-15 10:33:57.646059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:03.189 [2024-07-15 10:33:57.646086] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a73c0, cid 0, qid 0 00:21:03.189 [2024-07-15 10:33:57.656888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.189 [2024-07-15 10:33:57.656906] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.189 [2024-07-15 10:33:57.656914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.656923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a73c0) on tqpair=0x2247540 00:21:03.189 [2024-07-15 10:33:57.656945] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:03.189 [2024-07-15 10:33:57.656969] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:03.189 [2024-07-15 10:33:57.656980] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:03.189 [2024-07-15 10:33:57.657005] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.657014] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.657021] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247540) 00:21:03.189 [2024-07-15 10:33:57.657033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.189 [2024-07-15 10:33:57.657057] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a73c0, cid 0, qid 0 00:21:03.189 [2024-07-15 10:33:57.657223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.189 [2024-07-15 10:33:57.657238] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.189 [2024-07-15 10:33:57.657245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.657256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a73c0) on tqpair=0x2247540 00:21:03.189 [2024-07-15 10:33:57.657266] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:03.189 [2024-07-15 10:33:57.657280] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:03.189 [2024-07-15 10:33:57.657292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.657300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.657306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247540) 00:21:03.189 [2024-07-15 10:33:57.657317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.189 [2024-07-15 10:33:57.657338] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a73c0, cid 0, qid 0 00:21:03.189 [2024-07-15 10:33:57.657450] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.189 [2024-07-15 10:33:57.657465] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.189 [2024-07-15 10:33:57.657472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.657479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a73c0) on tqpair=0x2247540 00:21:03.189 [2024-07-15 10:33:57.657487] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:03.189 [2024-07-15 10:33:57.657502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:03.189 [2024-07-15 10:33:57.657514] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.657522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.657529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247540) 00:21:03.189 [2024-07-15 10:33:57.657539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.189 [2024-07-15 10:33:57.657560] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a73c0, cid 0, qid 0 00:21:03.189 [2024-07-15 10:33:57.657677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.189 
[2024-07-15 10:33:57.657692] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.189 [2024-07-15 10:33:57.657699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.657706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a73c0) on tqpair=0x2247540 00:21:03.189 [2024-07-15 10:33:57.657716] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:03.189 [2024-07-15 10:33:57.657733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.657742] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.657748] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247540) 00:21:03.189 [2024-07-15 10:33:57.657759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.189 [2024-07-15 10:33:57.657779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a73c0, cid 0, qid 0 00:21:03.189 [2024-07-15 10:33:57.657891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.189 [2024-07-15 10:33:57.657905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.189 [2024-07-15 10:33:57.657912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.657918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a73c0) on tqpair=0x2247540 00:21:03.189 [2024-07-15 10:33:57.657927] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:03.189 [2024-07-15 10:33:57.657940] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:03.189 [2024-07-15 10:33:57.657954] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:03.189 [2024-07-15 10:33:57.658064] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:03.189 [2024-07-15 10:33:57.658072] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:03.189 [2024-07-15 10:33:57.658088] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.658096] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.658102] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247540) 00:21:03.189 [2024-07-15 10:33:57.658113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.189 [2024-07-15 10:33:57.658134] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a73c0, cid 0, qid 0 00:21:03.189 [2024-07-15 10:33:57.658284] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.189 [2024-07-15 10:33:57.658299] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.189 [2024-07-15 10:33:57.658306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.658313] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a73c0) on tqpair=0x2247540 00:21:03.189 [2024-07-15 10:33:57.658321] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:03.189 [2024-07-15 10:33:57.658338] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.658347] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.658353] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247540) 00:21:03.189 [2024-07-15 10:33:57.658364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.189 [2024-07-15 10:33:57.658384] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a73c0, cid 0, qid 0 00:21:03.189 [2024-07-15 10:33:57.658504] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.189 [2024-07-15 10:33:57.658516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.189 [2024-07-15 10:33:57.658522] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.189 [2024-07-15 10:33:57.658529] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a73c0) on tqpair=0x2247540 00:21:03.189 [2024-07-15 10:33:57.658537] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:03.189 [2024-07-15 10:33:57.658546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:03.190 [2024-07-15 10:33:57.658560] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:03.190 [2024-07-15 10:33:57.658574] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:03.190 [2024-07-15 10:33:57.658591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.658599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247540) 00:21:03.190 [2024-07-15 10:33:57.658610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.190 [2024-07-15 10:33:57.658631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a73c0, cid 0, qid 0 00:21:03.190 [2024-07-15 10:33:57.658802] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:03.190 [2024-07-15 10:33:57.658818] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:03.190 [2024-07-15 10:33:57.658824] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.658831] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2247540): datao=0, datal=4096, cccid=0 00:21:03.190 [2024-07-15 10:33:57.658840] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a73c0) on tqpair(0x2247540): expected_datao=0, payload_size=4096 00:21:03.190 [2024-07-15 10:33:57.658848] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.658866] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.658882] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.700005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.190 [2024-07-15 10:33:57.700023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.190 [2024-07-15 10:33:57.700030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.700037] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a73c0) on tqpair=0x2247540 00:21:03.190 [2024-07-15 10:33:57.700051] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:03.190 [2024-07-15 10:33:57.700065] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:03.190 [2024-07-15 10:33:57.700074] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:03.190 [2024-07-15 10:33:57.700083] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:03.190 [2024-07-15 10:33:57.700092] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:03.190 [2024-07-15 10:33:57.700100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:03.190 [2024-07-15 10:33:57.700115] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:03.190 [2024-07-15 10:33:57.700129] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.700136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.700143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247540) 00:21:03.190 [2024-07-15 10:33:57.700155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:03.190 [2024-07-15 10:33:57.700177] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a73c0, cid 0, qid 0 00:21:03.190 [2024-07-15 10:33:57.700295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.190 [2024-07-15 10:33:57.700307] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.190 [2024-07-15 10:33:57.700314] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.700320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a73c0) on tqpair=0x2247540 00:21:03.190 [2024-07-15 10:33:57.700333] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.700341] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.700347] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2247540) 00:21:03.190 [2024-07-15 10:33:57.700357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.190 [2024-07-15 10:33:57.700367] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.700374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.700384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2247540) 00:21:03.190 [2024-07-15 10:33:57.700394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.190 [2024-07-15 10:33:57.700404] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.700411] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.700417] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2247540) 00:21:03.190 [2024-07-15 10:33:57.700426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.190 [2024-07-15 10:33:57.700435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.700442] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.700448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247540) 00:21:03.190 [2024-07-15 10:33:57.700457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.190 [2024-07-15 10:33:57.700466] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:03.190 [2024-07-15 10:33:57.700485] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:03.190 [2024-07-15 10:33:57.700498] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.700505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2247540) 00:21:03.190 [2024-07-15 10:33:57.700531] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.190 [2024-07-15 10:33:57.700553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a73c0, cid 0, qid 0 00:21:03.190 [2024-07-15 10:33:57.700564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a7540, cid 1, qid 0 00:21:03.190 [2024-07-15 10:33:57.700571] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a76c0, cid 2, qid 0 00:21:03.190 [2024-07-15 10:33:57.700594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a7840, cid 3, qid 0 00:21:03.190 [2024-07-15 10:33:57.700602] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a79c0, cid 4, qid 0 00:21:03.190 [2024-07-15 10:33:57.700779] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.190 [2024-07-15 10:33:57.700791] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.190 [2024-07-15 10:33:57.700798] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.700804] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a79c0) on tqpair=0x2247540 00:21:03.190 [2024-07-15 10:33:57.700815] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:03.190 [2024-07-15 10:33:57.700824] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:03.190 [2024-07-15 10:33:57.700842] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.700851] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2247540) 00:21:03.190 [2024-07-15 10:33:57.700862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.190 [2024-07-15 10:33:57.704898] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a79c0, cid 4, qid 0 00:21:03.190 [2024-07-15 10:33:57.704919] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:03.190 [2024-07-15 10:33:57.704930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:03.190 [2024-07-15 10:33:57.704936] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.704946] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2247540): datao=0, datal=4096, cccid=4 00:21:03.190 [2024-07-15 10:33:57.704954] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a79c0) on tqpair(0x2247540): expected_datao=0, payload_size=4096 00:21:03.190 [2024-07-15 10:33:57.704962] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.704971] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.704979] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.704987] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.190 [2024-07-15 10:33:57.704996] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.190 [2024-07-15 10:33:57.705002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.705008] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a79c0) on tqpair=0x2247540 00:21:03.190 [2024-07-15 10:33:57.705043] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:03.190 [2024-07-15 10:33:57.705086] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.705097] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2247540) 00:21:03.190 [2024-07-15 10:33:57.705107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.190 [2024-07-15 10:33:57.705119] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.705126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.705133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2247540) 00:21:03.190 [2024-07-15 10:33:57.705142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:03.190 [2024-07-15 10:33:57.705169] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x22a79c0, cid 4, qid 0 00:21:03.190 [2024-07-15 10:33:57.705181] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a7b40, cid 5, qid 0 00:21:03.190 [2024-07-15 10:33:57.705375] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:03.190 [2024-07-15 10:33:57.705387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:03.190 [2024-07-15 10:33:57.705394] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.705400] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2247540): datao=0, datal=1024, cccid=4 00:21:03.190 [2024-07-15 10:33:57.705408] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a79c0) on tqpair(0x2247540): expected_datao=0, payload_size=1024 00:21:03.190 [2024-07-15 10:33:57.705415] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.705425] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.705432] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:03.190 [2024-07-15 10:33:57.705440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.190 [2024-07-15 10:33:57.705449] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.190 [2024-07-15 10:33:57.705455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.191 [2024-07-15 10:33:57.705462] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a7b40) on tqpair=0x2247540 00:21:03.191 [2024-07-15 10:33:57.746012] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.191 [2024-07-15 10:33:57.746030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.191 [2024-07-15 10:33:57.746037] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.191 [2024-07-15 10:33:57.746044] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a79c0) on tqpair=0x2247540 00:21:03.191 [2024-07-15 10:33:57.746064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.191 [2024-07-15 10:33:57.746077] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2247540) 00:21:03.191 [2024-07-15 10:33:57.746089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.191 [2024-07-15 10:33:57.746117] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a79c0, cid 4, qid 0 00:21:03.191 [2024-07-15 10:33:57.746255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:03.191 [2024-07-15 10:33:57.746271] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:03.191 [2024-07-15 10:33:57.746278] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:03.191 [2024-07-15 10:33:57.746284] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2247540): datao=0, datal=3072, cccid=4 00:21:03.191 [2024-07-15 10:33:57.746292] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a79c0) on tqpair(0x2247540): expected_datao=0, payload_size=3072 00:21:03.191 [2024-07-15 10:33:57.746299] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.191 [2024-07-15 10:33:57.746309] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:03.191 [2024-07-15 10:33:57.746317] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:03.191 [2024-07-15 10:33:57.746329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.191 [2024-07-15 10:33:57.746338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.191 [2024-07-15 10:33:57.746345] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.191 [2024-07-15 10:33:57.746351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a79c0) on tqpair=0x2247540 00:21:03.191 [2024-07-15 10:33:57.746368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.191 [2024-07-15 10:33:57.746376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2247540) 00:21:03.191 [2024-07-15 10:33:57.746387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.191 [2024-07-15 10:33:57.746414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a79c0, cid 4, qid 0 00:21:03.191 [2024-07-15 10:33:57.746543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:03.191 [2024-07-15 10:33:57.746555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:03.191 [2024-07-15 10:33:57.746561] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:03.191 [2024-07-15 10:33:57.746568] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2247540): datao=0, datal=8, cccid=4 00:21:03.191 [2024-07-15 10:33:57.746575] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22a79c0) on tqpair(0x2247540): expected_datao=0, payload_size=8 00:21:03.191 [2024-07-15 10:33:57.746583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.191 [2024-07-15 10:33:57.746593] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:03.191 [2024-07-15 10:33:57.746600] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:03.191 [2024-07-15 10:33:57.787000] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.191 [2024-07-15 10:33:57.787019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.191 [2024-07-15 10:33:57.787026] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.191 [2024-07-15 10:33:57.787033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a79c0) on tqpair=0x2247540 00:21:03.191 ===================================================== 00:21:03.191 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:03.191 ===================================================== 00:21:03.191 Controller Capabilities/Features 00:21:03.191 ================================ 00:21:03.191 Vendor ID: 0000 00:21:03.191 Subsystem Vendor ID: 0000 00:21:03.191 Serial Number: .................... 00:21:03.191 Model Number: ........................................ 
00:21:03.191 Firmware Version: 24.09 00:21:03.191 Recommended Arb Burst: 0 00:21:03.191 IEEE OUI Identifier: 00 00 00 00:21:03.191 Multi-path I/O 00:21:03.191 May have multiple subsystem ports: No 00:21:03.191 May have multiple controllers: No 00:21:03.191 Associated with SR-IOV VF: No 00:21:03.191 Max Data Transfer Size: 131072 00:21:03.191 Max Number of Namespaces: 0 00:21:03.191 Max Number of I/O Queues: 1024 00:21:03.191 NVMe Specification Version (VS): 1.3 00:21:03.191 NVMe Specification Version (Identify): 1.3 00:21:03.191 Maximum Queue Entries: 128 00:21:03.191 Contiguous Queues Required: Yes 00:21:03.191 Arbitration Mechanisms Supported 00:21:03.191 Weighted Round Robin: Not Supported 00:21:03.191 Vendor Specific: Not Supported 00:21:03.191 Reset Timeout: 15000 ms 00:21:03.191 Doorbell Stride: 4 bytes 00:21:03.191 NVM Subsystem Reset: Not Supported 00:21:03.191 Command Sets Supported 00:21:03.191 NVM Command Set: Supported 00:21:03.191 Boot Partition: Not Supported 00:21:03.191 Memory Page Size Minimum: 4096 bytes 00:21:03.191 Memory Page Size Maximum: 4096 bytes 00:21:03.191 Persistent Memory Region: Not Supported 00:21:03.191 Optional Asynchronous Events Supported 00:21:03.191 Namespace Attribute Notices: Not Supported 00:21:03.191 Firmware Activation Notices: Not Supported 00:21:03.191 ANA Change Notices: Not Supported 00:21:03.191 PLE Aggregate Log Change Notices: Not Supported 00:21:03.191 LBA Status Info Alert Notices: Not Supported 00:21:03.191 EGE Aggregate Log Change Notices: Not Supported 00:21:03.191 Normal NVM Subsystem Shutdown event: Not Supported 00:21:03.191 Zone Descriptor Change Notices: Not Supported 00:21:03.191 Discovery Log Change Notices: Supported 00:21:03.191 Controller Attributes 00:21:03.191 128-bit Host Identifier: Not Supported 00:21:03.191 Non-Operational Permissive Mode: Not Supported 00:21:03.191 NVM Sets: Not Supported 00:21:03.191 Read Recovery Levels: Not Supported 00:21:03.191 Endurance Groups: Not Supported 00:21:03.191 Predictable Latency Mode: Not Supported 00:21:03.191 Traffic Based Keep ALive: Not Supported 00:21:03.191 Namespace Granularity: Not Supported 00:21:03.191 SQ Associations: Not Supported 00:21:03.191 UUID List: Not Supported 00:21:03.191 Multi-Domain Subsystem: Not Supported 00:21:03.191 Fixed Capacity Management: Not Supported 00:21:03.191 Variable Capacity Management: Not Supported 00:21:03.191 Delete Endurance Group: Not Supported 00:21:03.191 Delete NVM Set: Not Supported 00:21:03.191 Extended LBA Formats Supported: Not Supported 00:21:03.191 Flexible Data Placement Supported: Not Supported 00:21:03.191 00:21:03.191 Controller Memory Buffer Support 00:21:03.191 ================================ 00:21:03.191 Supported: No 00:21:03.191 00:21:03.191 Persistent Memory Region Support 00:21:03.191 ================================ 00:21:03.191 Supported: No 00:21:03.191 00:21:03.191 Admin Command Set Attributes 00:21:03.191 ============================ 00:21:03.191 Security Send/Receive: Not Supported 00:21:03.191 Format NVM: Not Supported 00:21:03.191 Firmware Activate/Download: Not Supported 00:21:03.191 Namespace Management: Not Supported 00:21:03.191 Device Self-Test: Not Supported 00:21:03.191 Directives: Not Supported 00:21:03.191 NVMe-MI: Not Supported 00:21:03.191 Virtualization Management: Not Supported 00:21:03.191 Doorbell Buffer Config: Not Supported 00:21:03.191 Get LBA Status Capability: Not Supported 00:21:03.191 Command & Feature Lockdown Capability: Not Supported 00:21:03.191 Abort Command Limit: 1 00:21:03.191 Async 
Event Request Limit: 4 00:21:03.191 Number of Firmware Slots: N/A 00:21:03.191 Firmware Slot 1 Read-Only: N/A 00:21:03.191 Firmware Activation Without Reset: N/A 00:21:03.191 Multiple Update Detection Support: N/A 00:21:03.191 Firmware Update Granularity: No Information Provided 00:21:03.191 Per-Namespace SMART Log: No 00:21:03.191 Asymmetric Namespace Access Log Page: Not Supported 00:21:03.191 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:03.191 Command Effects Log Page: Not Supported 00:21:03.191 Get Log Page Extended Data: Supported 00:21:03.191 Telemetry Log Pages: Not Supported 00:21:03.191 Persistent Event Log Pages: Not Supported 00:21:03.191 Supported Log Pages Log Page: May Support 00:21:03.191 Commands Supported & Effects Log Page: Not Supported 00:21:03.191 Feature Identifiers & Effects Log Page:May Support 00:21:03.191 NVMe-MI Commands & Effects Log Page: May Support 00:21:03.191 Data Area 4 for Telemetry Log: Not Supported 00:21:03.191 Error Log Page Entries Supported: 128 00:21:03.191 Keep Alive: Not Supported 00:21:03.191 00:21:03.191 NVM Command Set Attributes 00:21:03.191 ========================== 00:21:03.191 Submission Queue Entry Size 00:21:03.191 Max: 1 00:21:03.191 Min: 1 00:21:03.191 Completion Queue Entry Size 00:21:03.191 Max: 1 00:21:03.191 Min: 1 00:21:03.191 Number of Namespaces: 0 00:21:03.191 Compare Command: Not Supported 00:21:03.191 Write Uncorrectable Command: Not Supported 00:21:03.191 Dataset Management Command: Not Supported 00:21:03.191 Write Zeroes Command: Not Supported 00:21:03.191 Set Features Save Field: Not Supported 00:21:03.191 Reservations: Not Supported 00:21:03.191 Timestamp: Not Supported 00:21:03.191 Copy: Not Supported 00:21:03.191 Volatile Write Cache: Not Present 00:21:03.191 Atomic Write Unit (Normal): 1 00:21:03.191 Atomic Write Unit (PFail): 1 00:21:03.191 Atomic Compare & Write Unit: 1 00:21:03.191 Fused Compare & Write: Supported 00:21:03.191 Scatter-Gather List 00:21:03.191 SGL Command Set: Supported 00:21:03.191 SGL Keyed: Supported 00:21:03.192 SGL Bit Bucket Descriptor: Not Supported 00:21:03.192 SGL Metadata Pointer: Not Supported 00:21:03.192 Oversized SGL: Not Supported 00:21:03.192 SGL Metadata Address: Not Supported 00:21:03.192 SGL Offset: Supported 00:21:03.192 Transport SGL Data Block: Not Supported 00:21:03.192 Replay Protected Memory Block: Not Supported 00:21:03.192 00:21:03.192 Firmware Slot Information 00:21:03.192 ========================= 00:21:03.192 Active slot: 0 00:21:03.192 00:21:03.192 00:21:03.192 Error Log 00:21:03.192 ========= 00:21:03.192 00:21:03.192 Active Namespaces 00:21:03.192 ================= 00:21:03.192 Discovery Log Page 00:21:03.192 ================== 00:21:03.192 Generation Counter: 2 00:21:03.192 Number of Records: 2 00:21:03.192 Record Format: 0 00:21:03.192 00:21:03.192 Discovery Log Entry 0 00:21:03.192 ---------------------- 00:21:03.192 Transport Type: 3 (TCP) 00:21:03.192 Address Family: 1 (IPv4) 00:21:03.192 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:03.192 Entry Flags: 00:21:03.192 Duplicate Returned Information: 1 00:21:03.192 Explicit Persistent Connection Support for Discovery: 1 00:21:03.192 Transport Requirements: 00:21:03.192 Secure Channel: Not Required 00:21:03.192 Port ID: 0 (0x0000) 00:21:03.192 Controller ID: 65535 (0xffff) 00:21:03.192 Admin Max SQ Size: 128 00:21:03.192 Transport Service Identifier: 4420 00:21:03.192 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:03.192 Transport Address: 10.0.0.2 00:21:03.192 
Discovery Log Entry 1 00:21:03.192 ---------------------- 00:21:03.192 Transport Type: 3 (TCP) 00:21:03.192 Address Family: 1 (IPv4) 00:21:03.192 Subsystem Type: 2 (NVM Subsystem) 00:21:03.192 Entry Flags: 00:21:03.192 Duplicate Returned Information: 0 00:21:03.192 Explicit Persistent Connection Support for Discovery: 0 00:21:03.192 Transport Requirements: 00:21:03.192 Secure Channel: Not Required 00:21:03.192 Port ID: 0 (0x0000) 00:21:03.192 Controller ID: 65535 (0xffff) 00:21:03.192 Admin Max SQ Size: 128 00:21:03.192 Transport Service Identifier: 4420 00:21:03.192 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:03.192 Transport Address: 10.0.0.2 [2024-07-15 10:33:57.787156] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:03.192 [2024-07-15 10:33:57.787180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a73c0) on tqpair=0x2247540 00:21:03.192 [2024-07-15 10:33:57.787193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.192 [2024-07-15 10:33:57.787202] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a7540) on tqpair=0x2247540 00:21:03.192 [2024-07-15 10:33:57.787210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.192 [2024-07-15 10:33:57.787222] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a76c0) on tqpair=0x2247540 00:21:03.192 [2024-07-15 10:33:57.787230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.192 [2024-07-15 10:33:57.787238] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a7840) on tqpair=0x2247540 00:21:03.192 [2024-07-15 10:33:57.787246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:03.192 [2024-07-15 10:33:57.787264] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.192 [2024-07-15 10:33:57.787289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.192 [2024-07-15 10:33:57.787296] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247540) 00:21:03.192 [2024-07-15 10:33:57.787307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.192 [2024-07-15 10:33:57.787331] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a7840, cid 3, qid 0 00:21:03.192 [2024-07-15 10:33:57.787480] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.192 [2024-07-15 10:33:57.787492] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.192 [2024-07-15 10:33:57.787499] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.192 [2024-07-15 10:33:57.787506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a7840) on tqpair=0x2247540 00:21:03.192 [2024-07-15 10:33:57.787519] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.192 [2024-07-15 10:33:57.787527] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.192 [2024-07-15 10:33:57.787533] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247540) 00:21:03.192 [2024-07-15 
10:33:57.787544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.192 [2024-07-15 10:33:57.787570] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a7840, cid 3, qid 0
00:21:03.192 [2024-07-15 10:33:57.787702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.192 [2024-07-15 10:33:57.787717] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.192 [2024-07-15 10:33:57.787724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.787731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a7840) on tqpair=0x2247540
00:21:03.192 [2024-07-15 10:33:57.787739] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:21:03.192 [2024-07-15 10:33:57.787747] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:21:03.192 [2024-07-15 10:33:57.787764] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.787772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.787779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247540)
00:21:03.192 [2024-07-15 10:33:57.787790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.192 [2024-07-15 10:33:57.787810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a7840, cid 3, qid 0
00:21:03.192 [2024-07-15 10:33:57.787921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.192 [2024-07-15 10:33:57.787934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.192 [2024-07-15 10:33:57.787941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.787948] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a7840) on tqpair=0x2247540
00:21:03.192 [2024-07-15 10:33:57.787965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.787974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.787985] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247540)
00:21:03.192 [2024-07-15 10:33:57.787996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.192 [2024-07-15 10:33:57.788016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a7840, cid 3, qid 0
00:21:03.192 [2024-07-15 10:33:57.788133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.192 [2024-07-15 10:33:57.788148] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.192 [2024-07-15 10:33:57.788155] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.788161] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a7840) on tqpair=0x2247540
00:21:03.192 [2024-07-15 10:33:57.788178] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.788187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.788194] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247540)
00:21:03.192 [2024-07-15 10:33:57.788204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.192 [2024-07-15 10:33:57.788225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a7840, cid 3, qid 0
00:21:03.192 [2024-07-15 10:33:57.788334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.192 [2024-07-15 10:33:57.788345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.192 [2024-07-15 10:33:57.788352] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.788359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a7840) on tqpair=0x2247540
00:21:03.192 [2024-07-15 10:33:57.788375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.788384] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.788390] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247540)
00:21:03.192 [2024-07-15 10:33:57.788401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.192 [2024-07-15 10:33:57.788421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a7840, cid 3, qid 0
00:21:03.192 [2024-07-15 10:33:57.788532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.192 [2024-07-15 10:33:57.788544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.192 [2024-07-15 10:33:57.788550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.788557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a7840) on tqpair=0x2247540
00:21:03.192 [2024-07-15 10:33:57.788573] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.788582] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.788589] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247540)
00:21:03.192 [2024-07-15 10:33:57.788599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.192 [2024-07-15 10:33:57.788619] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a7840, cid 3, qid 0
00:21:03.192 [2024-07-15 10:33:57.788732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.192 [2024-07-15 10:33:57.788747] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.192 [2024-07-15 10:33:57.788753] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.788760] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a7840) on tqpair=0x2247540
00:21:03.192 [2024-07-15 10:33:57.788777] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.788787] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.192 [2024-07-15 10:33:57.788793] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247540)
00:21:03.192 [2024-07-15 10:33:57.788807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.192 [2024-07-15 10:33:57.788829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a7840, cid 3, qid 0
00:21:03.192 [2024-07-15 10:33:57.792889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.193 [2024-07-15 10:33:57.792905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.193 [2024-07-15 10:33:57.792912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.193 [2024-07-15 10:33:57.792919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a7840) on tqpair=0x2247540
00:21:03.193 [2024-07-15 10:33:57.792951] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.193 [2024-07-15 10:33:57.792961] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.193 [2024-07-15 10:33:57.792968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2247540)
00:21:03.193 [2024-07-15 10:33:57.792979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.193 [2024-07-15 10:33:57.793001] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22a7840, cid 3, qid 0
00:21:03.193 [2024-07-15 10:33:57.793149] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.193 [2024-07-15 10:33:57.793165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.193 [2024-07-15 10:33:57.793171] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.193 [2024-07-15 10:33:57.793178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22a7840) on tqpair=0x2247540
00:21:03.193 [2024-07-15 10:33:57.793193] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds
00:21:03.193
00:21:03.193 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:21:03.193 [2024-07-15 10:33:57.827830] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
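The spdk_nvme_identify invocation above is what produces everything that follows: the tool parses the -r transport-ID string, brings up an admin queue pair against the target, and prints the controller data it reads back. What follows is a minimal sketch of that flow against SPDK's public API, not the tool's actual source; the transport string is copied from the log, the process name "identify_sketch" is a hypothetical choice, and error reporting is trimmed:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&opts);
    opts.name = "identify_sketch"; /* hypothetical process name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    /* Same transport string the test passed via -r */
    if (spdk_nvme_transport_id_parse(&trid,
        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Synchronous connect; this drives the whole admin-queue bring-up
     * traced below (ICReq/ICResp, FABRIC CONNECT, PROPERTY GET/SET,
     * IDENTIFY, AER setup, keep-alive). */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    /* Cached result of IDENTIFY CONTROLLER (CNS 01h) */
    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Serial Number: %.20s\n", (const char *)cdata->sn);
    printf("Model Number: %.40s\n", (const char *)cdata->mn);
    printf("Firmware Version: %.8s\n", (const char *)cdata->fr);

    /* Detach triggers the CC shutdown handshake seen at the end of the log */
    spdk_nvme_detach(ctrlr);
    return 0;
}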
00:21:03.193 [2024-07-15 10:33:57.827892] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365900 ]
00:21:03.456 EAL: No free 2048 kB hugepages reported on node 1
00:21:03.456 [2024-07-15 10:33:57.862743] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:21:03.456 [2024-07-15 10:33:57.862797] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:21:03.456 [2024-07-15 10:33:57.862807] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:21:03.456 [2024-07-15 10:33:57.862821] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:21:03.456 [2024-07-15 10:33:57.862830] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:21:03.456 [2024-07-15 10:33:57.863075] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:21:03.456 [2024-07-15 10:33:57.863115] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1dd0540 0
00:21:03.456 [2024-07-15 10:33:57.876887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:21:03.456 [2024-07-15 10:33:57.876907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:21:03.456 [2024-07-15 10:33:57.876914] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:21:03.456 [2024-07-15 10:33:57.876920] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:21:03.456 [2024-07-15 10:33:57.876973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.456 [2024-07-15 10:33:57.876985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.456 [2024-07-15 10:33:57.876992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540)
00:21:03.456 [2024-07-15 10:33:57.877005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:21:03.456 [2024-07-15 10:33:57.877031] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0
00:21:03.456 [2024-07-15 10:33:57.884891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.456 [2024-07-15 10:33:57.884908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.456 [2024-07-15 10:33:57.884915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.456 [2024-07-15 10:33:57.884922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540
00:21:03.456 [2024-07-15 10:33:57.884935] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:21:03.456 [2024-07-15 10:33:57.884945] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:21:03.456 [2024-07-15 10:33:57.884954] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:21:03.456 [2024-07-15 10:33:57.884971] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.456 [2024-07-15 10:33:57.884979] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
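Each FABRIC PROPERTY GET/SET pair traced from here on is the NVMe-oF equivalent of a PCIe register access: the initiator reads VS, CAP and CSTS and writes CC to step the controller through disable, enable and, later, shutdown. Once a controller handle exists (for example from the sketch above), SPDK exposes the cached register values directly. A small helper, assuming only a connected handle, shows the accessors involved:

#include <stdio.h>
#include "spdk/nvme.h"

/* Print the controller registers that the PROPERTY GET commands in this
 * phase fetched over the fabric (VS, CAP, CSTS). */
void print_regs(struct spdk_nvme_ctrlr *ctrlr)
{
    union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
    union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
    union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

    /* For this target: VS 1.3 and CAP.MQES 127 (i.e. 128 queue entries),
     * matching the identify report further down. */
    printf("VS %u.%u CAP.MQES %u CSTS.RDY %u\n",
           vs.bits.mjr, vs.bits.mnr, cap.bits.mqes, csts.bits.rdy);
}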
00:21:03.456 [2024-07-15 10:33:57.884986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540)
00:21:03.456 [2024-07-15 10:33:57.884996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.456 [2024-07-15 10:33:57.885019] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0
00:21:03.456 [2024-07-15 10:33:57.885175] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.456 [2024-07-15 10:33:57.885188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.456 [2024-07-15 10:33:57.885195] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.456 [2024-07-15 10:33:57.885201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540
00:21:03.456 [2024-07-15 10:33:57.885210] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:21:03.456 [2024-07-15 10:33:57.885222] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:21:03.456 [2024-07-15 10:33:57.885234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.456 [2024-07-15 10:33:57.885242] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.456 [2024-07-15 10:33:57.885248] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540)
00:21:03.456 [2024-07-15 10:33:57.885259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.456 [2024-07-15 10:33:57.885280] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0
00:21:03.456 [2024-07-15 10:33:57.885398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.456 [2024-07-15 10:33:57.885413] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.456 [2024-07-15 10:33:57.885420] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.456 [2024-07-15 10:33:57.885426] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540
00:21:03.456 [2024-07-15 10:33:57.885435] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:21:03.456 [2024-07-15 10:33:57.885449] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:21:03.456 [2024-07-15 10:33:57.885461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.456 [2024-07-15 10:33:57.885472] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.456 [2024-07-15 10:33:57.885479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540)
00:21:03.456 [2024-07-15 10:33:57.885490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.456 [2024-07-15 10:33:57.885511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0
00:21:03.456 [2024-07-15 10:33:57.885623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.456 [2024-07-15 10:33:57.885638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.456 [2024-07-15 10:33:57.885644] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.456 [2024-07-15 10:33:57.885651] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540
00:21:03.457 [2024-07-15 10:33:57.885660] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:21:03.457 [2024-07-15 10:33:57.885677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.885686] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.885692] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540)
00:21:03.457 [2024-07-15 10:33:57.885703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.457 [2024-07-15 10:33:57.885724] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0
00:21:03.457 [2024-07-15 10:33:57.885834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.457 [2024-07-15 10:33:57.885847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.457 [2024-07-15 10:33:57.885853] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.885860] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540
00:21:03.457 [2024-07-15 10:33:57.885867] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:21:03.457 [2024-07-15 10:33:57.885882] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:21:03.457 [2024-07-15 10:33:57.885896] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:21:03.457 [2024-07-15 10:33:57.886009] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:21:03.457 [2024-07-15 10:33:57.886017] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:21:03.457 [2024-07-15 10:33:57.886029] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.886036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.886042] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540)
00:21:03.457 [2024-07-15 10:33:57.886052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.457 [2024-07-15 10:33:57.886073] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0
00:21:03.457 [2024-07-15 10:33:57.886229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.457 [2024-07-15 10:33:57.886241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.457 [2024-07-15 10:33:57.886248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.886255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540
00:21:03.457 [2024-07-15 10:33:57.886263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:21:03.457 [2024-07-15 10:33:57.886283] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.886293] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.886299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540)
00:21:03.457 [2024-07-15 10:33:57.886310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.457 [2024-07-15 10:33:57.886331] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0
00:21:03.457 [2024-07-15 10:33:57.886443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.457 [2024-07-15 10:33:57.886459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.457 [2024-07-15 10:33:57.886465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.886472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540
00:21:03.457 [2024-07-15 10:33:57.886480] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:21:03.457 [2024-07-15 10:33:57.886488] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:21:03.457 [2024-07-15 10:33:57.886502] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:21:03.457 [2024-07-15 10:33:57.886515] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:21:03.457 [2024-07-15 10:33:57.886529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.886537] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540)
00:21:03.457 [2024-07-15 10:33:57.886548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.457 [2024-07-15 10:33:57.886569] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0
00:21:03.457 [2024-07-15 10:33:57.886728] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:03.457 [2024-07-15 10:33:57.886741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:03.457 [2024-07-15 10:33:57.886747] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.886754] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd0540): datao=0, datal=4096, cccid=0
00:21:03.457 [2024-07-15 10:33:57.886761] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e303c0) on tqpair(0x1dd0540): expected_datao=0, payload_size=4096
00:21:03.457 [2024-07-15 10:33:57.886769] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.886779] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.886786] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.886809] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.457 [2024-07-15 10:33:57.886820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.457 [2024-07-15 10:33:57.886826] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.886833] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540
00:21:03.457 [2024-07-15 10:33:57.886844] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:21:03.457 [2024-07-15 10:33:57.886856] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:21:03.457 [2024-07-15 10:33:57.886865] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:21:03.457 [2024-07-15 10:33:57.886871] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:21:03.457 [2024-07-15 10:33:57.886888] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:21:03.457 [2024-07-15 10:33:57.886900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:21:03.457 [2024-07-15 10:33:57.886915] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:21:03.457 [2024-07-15 10:33:57.886927] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.886935] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.886941] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540)
00:21:03.457 [2024-07-15 10:33:57.886952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:03.457 [2024-07-15 10:33:57.886973] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0
00:21:03.457 [2024-07-15 10:33:57.887094] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.457 [2024-07-15 10:33:57.887106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.457 [2024-07-15 10:33:57.887113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.887120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540
00:21:03.457 [2024-07-15 10:33:57.887130] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.887137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.887143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd0540)
00:21:03.457 [2024-07-15 10:33:57.887153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:03.457 [2024-07-15 10:33:57.887163] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.887170] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.887176] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1dd0540)
00:21:03.457 [2024-07-15 10:33:57.887185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:03.457 [2024-07-15 10:33:57.887194] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.887201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.887207] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1dd0540)
00:21:03.457 [2024-07-15 10:33:57.887216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:03.457 [2024-07-15 10:33:57.887225] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.887232] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.887238] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540)
00:21:03.457 [2024-07-15 10:33:57.887247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:03.457 [2024-07-15 10:33:57.887256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:21:03.457 [2024-07-15 10:33:57.887288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:21:03.457 [2024-07-15 10:33:57.887301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.887308] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd0540)
00:21:03.457 [2024-07-15 10:33:57.887318] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.457 [2024-07-15 10:33:57.887339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e303c0, cid 0, qid 0
00:21:03.457 [2024-07-15 10:33:57.887368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30540, cid 1, qid 0
00:21:03.457 [2024-07-15 10:33:57.887376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e306c0, cid 2, qid 0
00:21:03.457 [2024-07-15 10:33:57.887384] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0
00:21:03.457 [2024-07-15 10:33:57.887391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e309c0, cid 4, qid 0
00:21:03.457 [2024-07-15 10:33:57.887560] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.457 [2024-07-15 10:33:57.887572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.457 [2024-07-15 10:33:57.887579] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.887586] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e309c0) on tqpair=0x1dd0540
00:21:03.457 [2024-07-15 10:33:57.887593] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:21:03.457 [2024-07-15 10:33:57.887602] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:21:03.457 [2024-07-15 10:33:57.887616] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:21:03.457 [2024-07-15 10:33:57.887629] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:21:03.457 [2024-07-15 10:33:57.887639] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.457 [2024-07-15 10:33:57.887647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.887668] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd0540)
00:21:03.458 [2024-07-15 10:33:57.887679] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:03.458 [2024-07-15 10:33:57.887699] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e309c0, cid 4, qid 0
00:21:03.458 [2024-07-15 10:33:57.887871] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.458 [2024-07-15 10:33:57.887894] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.458 [2024-07-15 10:33:57.887901] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.887908] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e309c0) on tqpair=0x1dd0540
00:21:03.458 [2024-07-15 10:33:57.887974] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:21:03.458 [2024-07-15 10:33:57.887994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:21:03.458 [2024-07-15 10:33:57.888009] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.888017] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd0540)
00:21:03.458 [2024-07-15 10:33:57.888027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.458 [2024-07-15 10:33:57.888065] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e309c0, cid 4, qid 0
00:21:03.458 [2024-07-15 10:33:57.888256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:03.458 [2024-07-15 10:33:57.888272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:03.458 [2024-07-15 10:33:57.888279] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.888285] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd0540): datao=0, datal=4096, cccid=4
00:21:03.458 [2024-07-15 10:33:57.888293] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e309c0) on tqpair(0x1dd0540): expected_datao=0, payload_size=4096
00:21:03.458 [2024-07-15 10:33:57.888304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.888322] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.888331] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.888399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.458 [2024-07-15 10:33:57.888414] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.458 [2024-07-15 10:33:57.888420] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.888427] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e309c0) on tqpair=0x1dd0540
00:21:03.458 [2024-07-15 10:33:57.888445] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:21:03.458 [2024-07-15 10:33:57.888468] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:21:03.458 [2024-07-15 10:33:57.888486] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:21:03.458 [2024-07-15 10:33:57.888500] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.888507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd0540)
00:21:03.458 [2024-07-15 10:33:57.888518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.458 [2024-07-15 10:33:57.888540] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e309c0, cid 4, qid 0
00:21:03.458 [2024-07-15 10:33:57.888681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:03.458 [2024-07-15 10:33:57.888697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:03.458 [2024-07-15 10:33:57.888703] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.888710] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd0540): datao=0, datal=4096, cccid=4
00:21:03.458 [2024-07-15 10:33:57.888717] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e309c0) on tqpair(0x1dd0540): expected_datao=0, payload_size=4096
00:21:03.458 [2024-07-15 10:33:57.888725] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.888742] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.888750] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.888822] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.458 [2024-07-15 10:33:57.888833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.458 [2024-07-15 10:33:57.888840] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.888846] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e309c0) on tqpair=0x1dd0540
00:21:03.458 [2024-07-15 10:33:57.888871] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:21:03.458 [2024-07-15 10:33:57.892901] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:21:03.458 [2024-07-15 10:33:57.892917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.892925] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd0540)
00:21:03.458 [2024-07-15 10:33:57.892936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.458 [2024-07-15 10:33:57.892957] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e309c0, cid 4, qid 0
00:21:03.458 [2024-07-15 10:33:57.893121] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:03.458 [2024-07-15 10:33:57.893133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:03.458 [2024-07-15 10:33:57.893140] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.893150] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd0540): datao=0, datal=4096, cccid=4
00:21:03.458 [2024-07-15 10:33:57.893158] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e309c0) on tqpair(0x1dd0540): expected_datao=0, payload_size=4096
00:21:03.458 [2024-07-15 10:33:57.893165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.893182] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.893190] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.893253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.458 [2024-07-15 10:33:57.893268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.458 [2024-07-15 10:33:57.893274] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.893281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e309c0) on tqpair=0x1dd0540
00:21:03.458 [2024-07-15 10:33:57.893296] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:21:03.458 [2024-07-15 10:33:57.893311] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:21:03.458 [2024-07-15 10:33:57.893328] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:21:03.458 [2024-07-15 10:33:57.893340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms)
00:21:03.458 [2024-07-15 10:33:57.893349] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:21:03.458 [2024-07-15 10:33:57.893358] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:21:03.458 [2024-07-15 10:33:57.893367] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:21:03.458 [2024-07-15 10:33:57.893375] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:21:03.458 [2024-07-15 10:33:57.893383] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:21:03.458 [2024-07-15 10:33:57.893401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.893410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd0540)
00:21:03.458 [2024-07-15 10:33:57.893421] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.458 [2024-07-15 10:33:57.893432] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.893439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.893460] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1dd0540)
00:21:03.458 [2024-07-15 10:33:57.893470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:21:03.458 [2024-07-15 10:33:57.893495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e309c0, cid 4, qid 0
00:21:03.458 [2024-07-15 10:33:57.893506] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30b40, cid 5, qid 0
00:21:03.458 [2024-07-15 10:33:57.893677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.458 [2024-07-15 10:33:57.893689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.458 [2024-07-15 10:33:57.893696] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.893703] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e309c0) on tqpair=0x1dd0540
00:21:03.458 [2024-07-15 10:33:57.893712] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.458 [2024-07-15 10:33:57.893725] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.458 [2024-07-15 10:33:57.893732] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.458 [2024-07-15 10:33:57.893738] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30b40) on tqpair=0x1dd0540
00:21:03.458 [2024-07-15 10:33:57.893754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.893763] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1dd0540)
00:21:03.459 [2024-07-15 10:33:57.893773] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.459 [2024-07-15 10:33:57.893794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30b40, cid 5, qid 0
00:21:03.459 [2024-07-15 10:33:57.893928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.459 [2024-07-15 10:33:57.893943] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.459 [2024-07-15 10:33:57.893950] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.893957] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30b40) on tqpair=0x1dd0540
00:21:03.459 [2024-07-15 10:33:57.893973] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.893982] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1dd0540)
00:21:03.459 [2024-07-15 10:33:57.893992] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.459 [2024-07-15 10:33:57.894013] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30b40, cid 5, qid 0
00:21:03.459 [2024-07-15 10:33:57.894132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.459 [2024-07-15 10:33:57.894144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.459 [2024-07-15 10:33:57.894151] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.894158] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30b40) on tqpair=0x1dd0540
00:21:03.459 [2024-07-15 10:33:57.894173] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.894182] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1dd0540)
00:21:03.459 [2024-07-15 10:33:57.894193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.459 [2024-07-15 10:33:57.894212] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30b40, cid 5, qid 0
00:21:03.459 [2024-07-15 10:33:57.894323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.459 [2024-07-15 10:33:57.894335] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.459 [2024-07-15 10:33:57.894342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.894349] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30b40) on tqpair=0x1dd0540
00:21:03.459 [2024-07-15 10:33:57.894396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.894407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1dd0540)
00:21:03.459 [2024-07-15 10:33:57.894418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.459 [2024-07-15 10:33:57.894430] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.894437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd0540)
00:21:03.459 [2024-07-15 10:33:57.894446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.459 [2024-07-15 10:33:57.894458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.894465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1dd0540)
00:21:03.459 [2024-07-15 10:33:57.894478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.459 [2024-07-15 10:33:57.894505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.894513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1dd0540)
00:21:03.459 [2024-07-15 10:33:57.894522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.459 [2024-07-15 10:33:57.894543] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30b40, cid 5, qid 0
00:21:03.459 [2024-07-15 10:33:57.894553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e309c0, cid 4, qid 0
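The four GET LOG PAGE commands just issued encode the page and transfer size in CDW10: per the NVMe specification, bits 07:00 carry the log page ID (LID) and bits 31:16 the number of dwords minus one (NUMDL). Decoding the values from the log explains why the c2h_data PDUs that follow carry 8192, 512, 512 and 4096 bytes. A self-contained sketch (the field layout comes from the spec, not from this log):

#include <stdint.h>
#include <stdio.h>

/* Decode NVMe GET LOG PAGE CDW10 as printed above: bits 07:00 are the
 * log page ID, bits 31:16 the number of dwords minus one. */
static void decode_get_log_page(uint32_t cdw10)
{
    uint32_t lid = cdw10 & 0xff;
    uint32_t numdl = (cdw10 >> 16) & 0xffff;

    printf("cdw10 0x%08x -> LID 0x%02x, %u bytes\n",
           cdw10, lid, (numdl + 1) * 4);
}

int main(void)
{
    /* The four fetches issued above */
    decode_get_log_page(0x07ff0001); /* Error Information: 8192 bytes */
    decode_get_log_page(0x007f0002); /* SMART / Health:     512 bytes */
    decode_get_log_page(0x007f0003); /* Firmware Slot:      512 bytes */
    decode_get_log_page(0x03ff0005); /* Command Effects:   4096 bytes */
    return 0;
}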
00:21:03.459 [2024-07-15 10:33:57.894576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30cc0, cid 6, qid 0
00:21:03.459 [2024-07-15 10:33:57.894584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30e40, cid 7, qid 0
00:21:03.459 [2024-07-15 10:33:57.894812] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:03.459 [2024-07-15 10:33:57.894828] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:03.459 [2024-07-15 10:33:57.894835] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.894841] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd0540): datao=0, datal=8192, cccid=5
00:21:03.459 [2024-07-15 10:33:57.894849] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e30b40) on tqpair(0x1dd0540): expected_datao=0, payload_size=8192
00:21:03.459 [2024-07-15 10:33:57.894856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.894907] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.894919] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.894928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:03.459 [2024-07-15 10:33:57.894937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:03.459 [2024-07-15 10:33:57.894943] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.894950] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd0540): datao=0, datal=512, cccid=4
00:21:03.459 [2024-07-15 10:33:57.894957] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e309c0) on tqpair(0x1dd0540): expected_datao=0, payload_size=512
00:21:03.459 [2024-07-15 10:33:57.894964] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.894974] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.894980] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.894988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:03.459 [2024-07-15 10:33:57.894997] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:03.459 [2024-07-15 10:33:57.895004] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.895010] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd0540): datao=0, datal=512, cccid=6
00:21:03.459 [2024-07-15 10:33:57.895017] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e30cc0) on tqpair(0x1dd0540): expected_datao=0, payload_size=512
00:21:03.459 [2024-07-15 10:33:57.895024] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.895034] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.895040] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.895049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:03.459 [2024-07-15 10:33:57.895057] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:03.459 [2024-07-15 10:33:57.895064] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.895070] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd0540): datao=0, datal=4096, cccid=7
00:21:03.459 [2024-07-15 10:33:57.895081] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e30e40) on tqpair(0x1dd0540): expected_datao=0, payload_size=4096
00:21:03.459 [2024-07-15 10:33:57.895089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.895099] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.895105] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.895117] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.459 [2024-07-15 10:33:57.895126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.459 [2024-07-15 10:33:57.895133] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.895139] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30b40) on tqpair=0x1dd0540
00:21:03.459 [2024-07-15 10:33:57.895158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.459 [2024-07-15 10:33:57.895169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.459 [2024-07-15 10:33:57.895175] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.895181] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e309c0) on tqpair=0x1dd0540
00:21:03.459 [2024-07-15 10:33:57.895212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.459 [2024-07-15 10:33:57.895222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.459 [2024-07-15 10:33:57.895228] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.895235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30cc0) on tqpair=0x1dd0540
00:21:03.459 [2024-07-15 10:33:57.895245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.459 [2024-07-15 10:33:57.895254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.459 [2024-07-15 10:33:57.895260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.459 [2024-07-15 10:33:57.895266] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30e40) on tqpair=0x1dd0540
00:21:03.459 =====================================================
00:21:03.459 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:03.459 =====================================================
00:21:03.459 Controller Capabilities/Features
00:21:03.459 ================================
00:21:03.459 Vendor ID: 8086
00:21:03.459 Subsystem Vendor ID: 8086
00:21:03.459 Serial Number: SPDK00000000000001
00:21:03.459 Model Number: SPDK bdev Controller
00:21:03.459 Firmware Version: 24.09
00:21:03.459 Recommended Arb Burst: 6
00:21:03.459 IEEE OUI Identifier: e4 d2 5c
00:21:03.459 Multi-path I/O
00:21:03.459 May have multiple subsystem ports: Yes
00:21:03.459 May have multiple controllers: Yes
00:21:03.459 Associated with SR-IOV VF: No
00:21:03.459 Max Data Transfer Size: 131072
00:21:03.459 Max Number of Namespaces: 32
00:21:03.459 Max Number of I/O Queues: 127
00:21:03.459 NVMe Specification Version (VS): 1.3
00:21:03.459 NVMe Specification Version (Identify): 1.3
00:21:03.459 Maximum Queue Entries: 128
00:21:03.459 Contiguous Queues Required: Yes
00:21:03.459 Arbitration Mechanisms Supported
00:21:03.459 Weighted Round Robin: Not Supported
00:21:03.459 Vendor Specific: Not Supported
00:21:03.459 Reset Timeout: 15000 ms
00:21:03.459 Doorbell Stride: 4 bytes
00:21:03.459 NVM Subsystem Reset: Not Supported
00:21:03.459 Command Sets Supported
00:21:03.459 NVM Command Set: Supported
00:21:03.459 Boot Partition: Not Supported
00:21:03.459 Memory Page Size Minimum: 4096 bytes
00:21:03.459 Memory Page Size Maximum: 4096 bytes
00:21:03.459 Persistent Memory Region: Not Supported
00:21:03.459 Optional Asynchronous Events Supported
00:21:03.459 Namespace Attribute Notices: Supported
00:21:03.459 Firmware Activation Notices: Not Supported
00:21:03.459 ANA Change Notices: Not Supported
00:21:03.459 PLE Aggregate Log Change Notices: Not Supported
00:21:03.459 LBA Status Info Alert Notices: Not Supported
00:21:03.459 EGE Aggregate Log Change Notices: Not Supported
00:21:03.459 Normal NVM Subsystem Shutdown event: Not Supported
00:21:03.460 Zone Descriptor Change Notices: Not Supported
00:21:03.460 Discovery Log Change Notices: Not Supported
00:21:03.460 Controller Attributes
00:21:03.460 128-bit Host Identifier: Supported
00:21:03.460 Non-Operational Permissive Mode: Not Supported
00:21:03.460 NVM Sets: Not Supported
00:21:03.460 Read Recovery Levels: Not Supported
00:21:03.460 Endurance Groups: Not Supported
00:21:03.460 Predictable Latency Mode: Not Supported
00:21:03.460 Traffic Based Keep ALive: Not Supported
00:21:03.460 Namespace Granularity: Not Supported
00:21:03.460 SQ Associations: Not Supported
00:21:03.460 UUID List: Not Supported
00:21:03.460 Multi-Domain Subsystem: Not Supported
00:21:03.460 Fixed Capacity Management: Not Supported
00:21:03.460 Variable Capacity Management: Not Supported
00:21:03.460 Delete Endurance Group: Not Supported
00:21:03.460 Delete NVM Set: Not Supported
00:21:03.460 Extended LBA Formats Supported: Not Supported
00:21:03.460 Flexible Data Placement Supported: Not Supported
00:21:03.460
00:21:03.460 Controller Memory Buffer Support
00:21:03.460 ================================
00:21:03.460 Supported: No
00:21:03.460
00:21:03.460 Persistent Memory Region Support
00:21:03.460 ================================
00:21:03.460 Supported: No
00:21:03.460
00:21:03.460 Admin Command Set Attributes
00:21:03.460 ============================
00:21:03.460 Security Send/Receive: Not Supported
00:21:03.460 Format NVM: Not Supported
00:21:03.460 Firmware Activate/Download: Not Supported
00:21:03.460 Namespace Management: Not Supported
00:21:03.460 Device Self-Test: Not Supported
00:21:03.460 Directives: Not Supported
00:21:03.460 NVMe-MI: Not Supported
00:21:03.460 Virtualization Management: Not Supported
00:21:03.460 Doorbell Buffer Config: Not Supported
00:21:03.460 Get LBA Status Capability: Not Supported
00:21:03.460 Command & Feature Lockdown Capability: Not Supported
00:21:03.460 Abort Command Limit: 4
00:21:03.460 Async Event Request Limit: 4
00:21:03.460 Number of Firmware Slots: N/A
00:21:03.460 Firmware Slot 1 Read-Only: N/A
00:21:03.460 Firmware Activation Without Reset: N/A
00:21:03.460 Multiple Update Detection Support: N/A
00:21:03.460 Firmware Update Granularity: No Information Provided
00:21:03.460 Per-Namespace SMART Log: No
00:21:03.460 Asymmetric Namespace Access Log Page: Not Supported
00:21:03.460 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:21:03.460 Command Effects Log Page: Supported
00:21:03.460 Get Log Page Extended Data: Supported
00:21:03.460 Telemetry Log Pages: Not Supported
00:21:03.460 Persistent Event Log Pages: Not Supported
00:21:03.460 Supported Log Pages Log Page: May Support
00:21:03.460 Commands Supported & Effects Log Page: Not Supported
00:21:03.460 Feature Identifiers & Effects Log Page:May Support
00:21:03.460 NVMe-MI Commands & Effects Log Page: May Support
00:21:03.460 Data Area 4 for Telemetry Log: Not Supported
00:21:03.460 Error Log Page Entries Supported: 128
00:21:03.460 Keep Alive: Supported
00:21:03.460 Keep Alive Granularity: 10000 ms
00:21:03.460
00:21:03.460 NVM Command Set Attributes
00:21:03.460 ==========================
00:21:03.460 Submission Queue Entry Size
00:21:03.460 Max: 64
00:21:03.460 Min: 64
00:21:03.460 Completion Queue Entry Size
00:21:03.460 Max: 16
00:21:03.460 Min: 16
00:21:03.460 Number of Namespaces: 32
00:21:03.460 Compare Command: Supported
00:21:03.460 Write Uncorrectable Command: Not Supported
00:21:03.460 Dataset Management Command: Supported
00:21:03.460 Write Zeroes Command: Supported
00:21:03.460 Set Features Save Field: Not Supported
00:21:03.460 Reservations: Supported
00:21:03.460 Timestamp: Not Supported
00:21:03.460 Copy: Supported
00:21:03.460 Volatile Write Cache: Present
00:21:03.460 Atomic Write Unit (Normal): 1
00:21:03.460 Atomic Write Unit (PFail): 1
00:21:03.460 Atomic Compare & Write Unit: 1
00:21:03.460 Fused Compare & Write: Supported
00:21:03.460 Scatter-Gather List
00:21:03.460 SGL Command Set: Supported
00:21:03.460 SGL Keyed: Supported
00:21:03.460 SGL Bit Bucket Descriptor: Not Supported
00:21:03.460 SGL Metadata Pointer: Not Supported
00:21:03.460 Oversized SGL: Not Supported
00:21:03.460 SGL Metadata Address: Not Supported
00:21:03.460 SGL Offset: Supported
00:21:03.460 Transport SGL Data Block: Not Supported
00:21:03.460 Replay Protected Memory Block: Not Supported
00:21:03.460
00:21:03.460 Firmware Slot Information
00:21:03.460 =========================
00:21:03.460 Active slot: 1
00:21:03.460 Slot 1 Firmware Revision: 24.09
00:21:03.460
00:21:03.460
00:21:03.460 Commands Supported and Effects
00:21:03.460 ==============================
00:21:03.460 Admin Commands
00:21:03.460 --------------
00:21:03.460 Get Log Page (02h): Supported
00:21:03.460 Identify (06h): Supported
00:21:03.460 Abort (08h): Supported
00:21:03.460 Set Features (09h): Supported
00:21:03.460 Get Features (0Ah): Supported
00:21:03.460 Asynchronous Event Request (0Ch): Supported
00:21:03.460 Keep Alive (18h): Supported
00:21:03.460 I/O Commands
00:21:03.460 ------------
00:21:03.460 Flush (00h): Supported LBA-Change
00:21:03.460 Write (01h): Supported LBA-Change
00:21:03.460 Read (02h): Supported
00:21:03.460 Compare (05h): Supported
00:21:03.460 Write Zeroes (08h): Supported LBA-Change
00:21:03.460 Dataset Management (09h): Supported LBA-Change
00:21:03.460 Copy (19h): Supported LBA-Change
00:21:03.460
00:21:03.460 Error Log
00:21:03.460 =========
00:21:03.460
00:21:03.460 Arbitration
00:21:03.460 ===========
00:21:03.460 Arbitration Burst: 1
00:21:03.460
00:21:03.460 Power Management
00:21:03.460 ================
00:21:03.460 Number of Power States: 1
00:21:03.460 Current Power State: Power State #0
00:21:03.460 Power State #0:
00:21:03.460 Max Power: 0.00 W
00:21:03.460 Non-Operational State: Operational
00:21:03.460 Entry Latency: Not Reported
00:21:03.460 Exit Latency: Not Reported
00:21:03.460 Relative Read Throughput: 0
00:21:03.460 Relative Read Latency: 0
00:21:03.460 Relative Write Throughput: 0
00:21:03.460 Relative Write Latency: 0
00:21:03.460 Idle Power: Not Reported
00:21:03.460 Active Power: Not Reported
00:21:03.460 Non-Operational Permissive Mode: Not Supported
00:21:03.460
00:21:03.460 Health Information
00:21:03.460 ==================
00:21:03.460 Critical Warnings:
00:21:03.460 Available Spare Space: OK
00:21:03.460 Temperature: OK
00:21:03.460 Device Reliability: OK
00:21:03.460 Read Only: No
00:21:03.460 Volatile Memory Backup: OK
00:21:03.460 Current Temperature: 0 Kelvin (-273 Celsius)
00:21:03.460 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:21:03.460 Available Spare: 0%
00:21:03.460 Available Spare Threshold: 0%
00:21:03.460 Life Percentage Used:[2024-07-15 10:33:57.895411] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.460 [2024-07-15 10:33:57.895423] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1dd0540)
00:21:03.460 [2024-07-15 10:33:57.895434] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.460 [2024-07-15 10:33:57.895455] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30e40, cid 7, qid 0
00:21:03.460 [2024-07-15 10:33:57.895627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.460 [2024-07-15 10:33:57.895639] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.460 [2024-07-15 10:33:57.895645] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.460 [2024-07-15 10:33:57.895652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30e40) on tqpair=0x1dd0540
00:21:03.460 [2024-07-15 10:33:57.895699] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:21:03.460 [2024-07-15 10:33:57.895718] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e303c0) on tqpair=0x1dd0540
00:21:03.460 [2024-07-15 10:33:57.895728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.460 [2024-07-15 10:33:57.895737] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30540) on tqpair=0x1dd0540
00:21:03.460 [2024-07-15 10:33:57.895745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.460 [2024-07-15 10:33:57.895753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e306c0) on tqpair=0x1dd0540
00:21:03.460 [2024-07-15 10:33:57.895760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.460 [2024-07-15 10:33:57.895787] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540
00:21:03.461 [2024-07-15 10:33:57.895796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:03.461 [2024-07-15 10:33:57.895808] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.461 [2024-07-15 10:33:57.895816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.461 [2024-07-15 10:33:57.895822] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540)
00:21:03.461 [2024-07-15 10:33:57.895832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.461 [2024-07-15 10:33:57.895853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0
00:21:03.461 [2024-07-15 10:33:57.896007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.461 [2024-07-15 10:33:57.896023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.461 [2024-07-15 10:33:57.896029] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.461 [2024-07-15 10:33:57.896036] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540
00:21:03.461 [2024-07-15 10:33:57.896047] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.461 [2024-07-15 10:33:57.896055] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.461 [2024-07-15 10:33:57.896061] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540)
00:21:03.461 [2024-07-15 10:33:57.896072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.461 [2024-07-15 10:33:57.896098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0
00:21:03.461 [2024-07-15 10:33:57.896225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.461 [2024-07-15 10:33:57.896237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.461 [2024-07-15 10:33:57.896243] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.461 [2024-07-15 10:33:57.896250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540
00:21:03.461 [2024-07-15 10:33:57.896257] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:21:03.461 [2024-07-15 10:33:57.896265] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:21:03.461 [2024-07-15 10:33:57.896281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.461 [2024-07-15 10:33:57.896289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.461 [2024-07-15 10:33:57.896295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540)
00:21:03.461 [2024-07-15 10:33:57.896305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:03.461 [2024-07-15 10:33:57.896326] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0
00:21:03.461 [2024-07-15 10:33:57.896447] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:03.461 [2024-07-15 10:33:57.896462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:03.461 [2024-07-15 10:33:57.896469] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:03.461 [2024-07-15 10:33:57.896475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540
00:21:03.461 [2024-07-15 10:33:57.896492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:03.461 [2024-07-15 10:33:57.896501] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:03.461 [2024-07-15 10:33:57.896507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540)
00:21:03.461 [2024-07-15 10:33:57.896517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.461 [2024-07-15 10:33:57.896538] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0 00:21:03.461 [2024-07-15 10:33:57.896647] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.461 [2024-07-15 10:33:57.896659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.461 [2024-07-15 10:33:57.896666] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.461 [2024-07-15 10:33:57.896673] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540 00:21:03.461 [2024-07-15 10:33:57.896688] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.461 [2024-07-15 10:33:57.896697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.461 [2024-07-15 10:33:57.896704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540) 00:21:03.461 [2024-07-15 10:33:57.896714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.461 [2024-07-15 10:33:57.896734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0 00:21:03.461 [2024-07-15 10:33:57.896845] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.461 [2024-07-15 10:33:57.896860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.461 [2024-07-15 10:33:57.896867] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.461 [2024-07-15 10:33:57.896873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540 00:21:03.461 [2024-07-15 10:33:57.900904] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:03.461 [2024-07-15 10:33:57.900915] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:03.461 [2024-07-15 10:33:57.900921] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd0540) 00:21:03.461 [2024-07-15 10:33:57.900932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.462 [2024-07-15 10:33:57.900953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e30840, cid 3, qid 0 00:21:03.462 [2024-07-15 10:33:57.901109] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:03.462 [2024-07-15 10:33:57.901125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:03.462 [2024-07-15 10:33:57.901132] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:03.462 [2024-07-15 10:33:57.901138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e30840) on tqpair=0x1dd0540 00:21:03.462 [2024-07-15 10:33:57.901152] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:21:03.462 0% 00:21:03.462 Data Units Read: 0 00:21:03.462 Data Units Written: 0 00:21:03.462 Host Read Commands: 0 00:21:03.462 Host Write Commands: 0 00:21:03.462 Controller Busy Time: 0 minutes 00:21:03.462 Power Cycles: 0 00:21:03.462 Power On Hours: 0 hours 00:21:03.462 Unsafe Shutdowns: 0 00:21:03.462 Unrecoverable Media Errors: 0 00:21:03.462 Lifetime Error Log Entries: 0 00:21:03.462 Warning Temperature Time: 0 minutes 00:21:03.462 Critical Temperature Time: 0 minutes 00:21:03.462 00:21:03.462 Number of Queues 
00:21:03.462 ================ 00:21:03.462 Number of I/O Submission Queues: 127 00:21:03.462 Number of I/O Completion Queues: 127 00:21:03.462 00:21:03.462 Active Namespaces 00:21:03.462 ================= 00:21:03.462 Namespace ID:1 00:21:03.462 Error Recovery Timeout: Unlimited 00:21:03.462 Command Set Identifier: NVM (00h) 00:21:03.462 Deallocate: Supported 00:21:03.462 Deallocated/Unwritten Error: Not Supported 00:21:03.462 Deallocated Read Value: Unknown 00:21:03.462 Deallocate in Write Zeroes: Not Supported 00:21:03.462 Deallocated Guard Field: 0xFFFF 00:21:03.462 Flush: Supported 00:21:03.462 Reservation: Supported 00:21:03.462 Namespace Sharing Capabilities: Multiple Controllers 00:21:03.462 Size (in LBAs): 131072 (0GiB) 00:21:03.462 Capacity (in LBAs): 131072 (0GiB) 00:21:03.462 Utilization (in LBAs): 131072 (0GiB) 00:21:03.462 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:03.462 EUI64: ABCDEF0123456789 00:21:03.462 UUID: cb0ef4d3-98e1-4a06-9b77-476d004af7e9 00:21:03.462 Thin Provisioning: Not Supported 00:21:03.462 Per-NS Atomic Units: Yes 00:21:03.462 Atomic Boundary Size (Normal): 0 00:21:03.462 Atomic Boundary Size (PFail): 0 00:21:03.462 Atomic Boundary Offset: 0 00:21:03.462 Maximum Single Source Range Length: 65535 00:21:03.462 Maximum Copy Length: 65535 00:21:03.462 Maximum Source Range Count: 1 00:21:03.462 NGUID/EUI64 Never Reused: No 00:21:03.462 Namespace Write Protected: No 00:21:03.462 Number of LBA Formats: 1 00:21:03.462 Current LBA Format: LBA Format #00 00:21:03.462 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:03.462 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:03.462 rmmod nvme_tcp 00:21:03.462 rmmod nvme_fabrics 00:21:03.462 rmmod nvme_keyring 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2365757 ']' 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2365757 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2365757 ']' 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2365757 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@953 -- # uname 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:03.462 10:33:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2365757 00:21:03.462 10:33:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:03.462 10:33:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:03.462 10:33:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2365757' 00:21:03.462 killing process with pid 2365757 00:21:03.462 10:33:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2365757 00:21:03.462 10:33:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2365757 00:21:03.720 10:33:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:03.720 10:33:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:03.720 10:33:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:03.720 10:33:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:03.720 10:33:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:03.720 10:33:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.721 10:33:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.721 10:33:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.314 10:34:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:06.314 00:21:06.314 real 0m5.409s 00:21:06.314 user 0m4.414s 00:21:06.314 sys 0m1.834s 00:21:06.314 10:34:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:06.314 10:34:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:06.314 ************************************ 00:21:06.314 END TEST nvmf_identify 00:21:06.314 ************************************ 00:21:06.314 10:34:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:06.314 10:34:00 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:06.314 10:34:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:06.314 10:34:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:06.314 10:34:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:06.314 ************************************ 00:21:06.314 START TEST nvmf_perf 00:21:06.314 ************************************ 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:06.314 * Looking for test storage... 
00:21:06.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.314 10:34:00 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:21:06.314 10:34:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:08.221 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:08.221 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:08.221 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:08.221 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.221 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:08.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:21:08.221 00:21:08.221 --- 10.0.0.2 ping statistics --- 00:21:08.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.222 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:08.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:21:08.222 00:21:08.222 --- 10.0.0.1 ping statistics --- 00:21:08.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.222 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2367828 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2367828 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2367828 ']' 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:08.222 10:34:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:08.222 [2024-07-15 10:34:02.576838] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:08.222 [2024-07-15 10:34:02.576932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.222 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.222 [2024-07-15 10:34:02.640039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:08.222 [2024-07-15 10:34:02.750328] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.222 [2024-07-15 10:34:02.750377] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
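
The two pings above confirm the veth-free topology that nvmf_tcp_init assembled a few entries earlier: one E810 port moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while its sibling stays in the root namespace as the initiator (10.0.0.1). A condensed sketch of that plumbing, reusing the interface and namespace names from this trace:

    # Condensed sketch of the nvmf_tcp_init steps traced above; run as root.
    NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                     # target-side E810 port
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root ns
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # sanity check, as in the log

The iptables rule is what lets the initiator reach the NVMe-oF listener on port 4420 once the subsystem comes up later in this test.
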
00:21:08.222 [2024-07-15 10:34:02.750404] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.222 [2024-07-15 10:34:02.750415] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.222 [2024-07-15 10:34:02.750424] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.222 [2024-07-15 10:34:02.750504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.222 [2024-07-15 10:34:02.750568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.222 [2024-07-15 10:34:02.750635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.222 [2024-07-15 10:34:02.750638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.480 10:34:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.480 10:34:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:21:08.480 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:08.480 10:34:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:08.480 10:34:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:08.480 10:34:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.480 10:34:02 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:08.480 10:34:02 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:11.767 10:34:05 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:11.767 10:34:05 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:11.767 10:34:06 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:21:11.767 10:34:06 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:12.024 10:34:06 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:12.024 10:34:06 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:21:12.024 10:34:06 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:12.024 10:34:06 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:12.024 10:34:06 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:12.282 [2024-07-15 10:34:06.751296] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.282 10:34:06 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:12.540 10:34:07 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:12.540 10:34:07 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:12.798 10:34:07 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:12.798 10:34:07 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:13.056 10:34:07 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:13.313 [2024-07-15 10:34:07.741716] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.313 10:34:07 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:13.573 10:34:08 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:21:13.573 10:34:08 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:13.573 10:34:08 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:13.573 10:34:08 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:14.947 Initializing NVMe Controllers 00:21:14.947 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:21:14.947 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:21:14.947 Initialization complete. Launching workers. 00:21:14.947 ======================================================== 00:21:14.947 Latency(us) 00:21:14.947 Device Information : IOPS MiB/s Average min max 00:21:14.947 PCIE (0000:88:00.0) NSID 1 from core 0: 84114.57 328.57 379.72 35.02 5248.75 00:21:14.947 ======================================================== 00:21:14.947 Total : 84114.57 328.57 379.72 35.02 5248.75 00:21:14.947 00:21:14.947 10:34:09 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:14.947 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.323 Initializing NVMe Controllers 00:21:16.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:16.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:16.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:16.323 Initialization complete. Launching workers. 
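
The fabrics runs in this stretch all target the subsystem assembled over JSON-RPC in the trace above: a TCP transport, cnode1, two namespaces (Malloc0 plus the local Nvme0n1 attached via gen_nvme.sh), and listeners on 10.0.0.2:4420. A minimal sketch of that bring-up, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock socket; Malloc0 is the name bdev_malloc_create assigns by default, which matches the bdevs variable in the trace:

    # Hedged recreation of the rpc.py sequence traced above (host/perf.sh@42-49).
    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o                            # flags as traced
    $RPC bdev_malloc_create 64 512                                  # 64 MiB, 512 B blocks -> Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # local NVMe bdev
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The latency table that follows belongs to the -q 1 fabrics run launched just above.
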
00:21:16.323 ======================================================== 00:21:16.323 Latency(us) 00:21:16.323 Device Information : IOPS MiB/s Average min max 00:21:16.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 89.00 0.35 11668.64 177.54 45785.90 00:21:16.323 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 85.00 0.33 11911.16 5094.83 51890.77 00:21:16.323 ======================================================== 00:21:16.323 Total : 174.00 0.68 11787.11 177.54 51890.77 00:21:16.323 00:21:16.323 10:34:10 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:16.323 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.694 Initializing NVMe Controllers 00:21:17.694 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:17.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:17.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:17.694 Initialization complete. Launching workers. 00:21:17.694 ======================================================== 00:21:17.694 Latency(us) 00:21:17.694 Device Information : IOPS MiB/s Average min max 00:21:17.694 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8425.11 32.91 3802.81 438.97 10455.70 00:21:17.694 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3851.82 15.05 8319.50 6910.19 16146.77 00:21:17.694 ======================================================== 00:21:17.694 Total : 12276.94 47.96 5219.90 438.97 16146.77 00:21:17.694 00:21:17.694 10:34:12 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:17.694 10:34:12 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:17.694 10:34:12 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:17.694 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.222 Initializing NVMe Controllers 00:21:20.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:20.222 Controller IO queue size 128, less than required. 00:21:20.222 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.222 Controller IO queue size 128, less than required. 00:21:20.222 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:20.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:20.222 Initialization complete. Launching workers. 
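
Each results table in this stretch is one spdk_nvme_perf invocation against that subsystem, stepping queue depth and I/O size: the -q 1 baseline whose table appears above, a -q 32 run with latency tracking (-HI) whose table follows, then -q 128 large-block runs. A simplified sketch of the sweep, with flags copied from the logged commands and paths shortened:

    # Simplified queue-depth sweep (4 KiB random read/write, 50/50 mix, 1 s per step).
    PERF=./build/bin/spdk_nvme_perf
    TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    for qd in 1 32 128; do
        $PERF -q "$qd" -o 4096 -w randrw -M 50 -t 1 -r "$TRID"
    done

The real script also varies -o up to 262144 and adds -O 16384 to cap outstanding I/O per namespace; the warning later in the log about an I/O size of 36964 not being a multiple of the 512-byte sector size is the expected outcome of the deliberately odd -o value, and both namespaces are dropped from that run.
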
00:21:20.222 ======================================================== 00:21:20.222 Latency(us) 00:21:20.222 Device Information : IOPS MiB/s Average min max 00:21:20.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1192.99 298.25 109945.91 71357.63 146691.02 00:21:20.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 606.99 151.75 224328.35 77287.85 351911.03 00:21:20.222 ======================================================== 00:21:20.222 Total : 1799.98 450.00 148518.21 71357.63 351911.03 00:21:20.222 00:21:20.222 10:34:14 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:20.222 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.481 No valid NVMe controllers or AIO or URING devices found 00:21:20.481 Initializing NVMe Controllers 00:21:20.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:20.481 Controller IO queue size 128, less than required. 00:21:20.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.481 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:20.481 Controller IO queue size 128, less than required. 00:21:20.482 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:20.482 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:20.482 WARNING: Some requested NVMe devices were skipped 00:21:20.482 10:34:14 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:20.482 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.047 Initializing NVMe Controllers 00:21:23.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:23.047 Controller IO queue size 128, less than required. 00:21:23.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:23.047 Controller IO queue size 128, less than required. 00:21:23.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:23.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:23.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:23.047 Initialization complete. Launching workers. 
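
The run launched just above added --transport-stat, so its output below includes per-poll-group TCP counters ahead of the usual latency table. Reading the NSID 1 numbers that follow: busy polls are total polls minus idle polls, and completions per busy poll is a rough batching indicator. A sketch of that arithmetic on the reported counters:

    # Rough poll-efficiency check on the NSID 1 counters printed below.
    polls=19264 idle_polls=6845 sock_completions=12419
    busy=$((polls - idle_polls))                                     # 12419
    echo "completions per busy poll: $((sock_completions / busy))"   # 1
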
00:21:23.047 00:21:23.047 ==================== 00:21:23.047 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:23.047 TCP transport: 00:21:23.047 polls: 19264 00:21:23.047 idle_polls: 6845 00:21:23.047 sock_completions: 12419 00:21:23.047 nvme_completions: 5069 00:21:23.047 submitted_requests: 7582 00:21:23.047 queued_requests: 1 00:21:23.047 00:21:23.047 ==================== 00:21:23.047 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:23.047 TCP transport: 00:21:23.047 polls: 19219 00:21:23.047 idle_polls: 6850 00:21:23.047 sock_completions: 12369 00:21:23.047 nvme_completions: 4963 00:21:23.047 submitted_requests: 7432 00:21:23.047 queued_requests: 1 00:21:23.047 ======================================================== 00:21:23.047 Latency(us) 00:21:23.047 Device Information : IOPS MiB/s Average min max 00:21:23.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1266.45 316.61 105508.05 54112.30 189180.09 00:21:23.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1239.96 309.99 105294.24 49927.94 149075.40 00:21:23.047 ======================================================== 00:21:23.047 Total : 2506.40 626.60 105402.27 49927.94 189180.09 00:21:23.047 00:21:23.047 10:34:17 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:23.047 10:34:17 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:23.047 10:34:17 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:23.047 10:34:17 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:23.047 10:34:17 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:23.047 10:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:23.047 10:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:23.047 10:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:23.047 10:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:23.047 10:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:23.047 10:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:23.047 rmmod nvme_tcp 00:21:23.306 rmmod nvme_fabrics 00:21:23.306 rmmod nvme_keyring 00:21:23.306 10:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:23.306 10:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:23.306 10:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:23.306 10:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2367828 ']' 00:21:23.306 10:34:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2367828 00:21:23.306 10:34:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2367828 ']' 00:21:23.306 10:34:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2367828 00:21:23.306 10:34:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:21:23.306 10:34:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.306 10:34:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2367828 00:21:23.306 10:34:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:23.306 10:34:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:23.306 10:34:17 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2367828' 00:21:23.306 killing process with pid 2367828 00:21:23.306 10:34:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2367828 00:21:23.306 10:34:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2367828 00:21:25.210 10:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:25.210 10:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:25.210 10:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:25.210 10:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.210 10:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:25.210 10:34:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.210 10:34:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.210 10:34:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.117 10:34:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:27.117 00:21:27.117 real 0m21.062s 00:21:27.117 user 1m4.711s 00:21:27.117 sys 0m4.986s 00:21:27.117 10:34:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:27.117 10:34:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:27.117 ************************************ 00:21:27.117 END TEST nvmf_perf 00:21:27.117 ************************************ 00:21:27.117 10:34:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:27.117 10:34:21 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:27.117 10:34:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:27.117 10:34:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:27.117 10:34:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:27.117 ************************************ 00:21:27.117 START TEST nvmf_fio_host 00:21:27.117 ************************************ 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:27.117 * Looking for test storage... 
00:21:27.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:27.117 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:27.118 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:27.118 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.118 10:34:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.118 10:34:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.118 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:27.118 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:27.118 10:34:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:27.118 10:34:21 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:29.022 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:29.022 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:29.022 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:29.022 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:29.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:29.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:21:29.022 00:21:29.022 --- 10.0.0.2 ping statistics --- 00:21:29.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.022 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:29.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:29.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:21:29.022 00:21:29.022 --- 10.0.0.1 ping statistics --- 00:21:29.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.022 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:29.022 10:34:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.023 10:34:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2371793 00:21:29.023 10:34:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:29.023 10:34:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:29.023 10:34:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2371793 00:21:29.023 10:34:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2371793 ']' 00:21:29.023 10:34:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.023 10:34:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.023 10:34:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.023 10:34:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.023 10:34:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.282 [2024-07-15 10:34:23.709752] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:29.282 [2024-07-15 10:34:23.709836] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.282 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.282 [2024-07-15 10:34:23.777771] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:29.282 [2024-07-15 10:34:23.895216] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:29.282 [2024-07-15 10:34:23.895269] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.282 [2024-07-15 10:34:23.895283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.282 [2024-07-15 10:34:23.895294] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.282 [2024-07-15 10:34:23.895304] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.282 [2024-07-15 10:34:23.895361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.282 [2024-07-15 10:34:23.895419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.282 [2024-07-15 10:34:23.895489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.282 [2024-07-15 10:34:23.895491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.540 10:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.540 10:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:21:29.540 10:34:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:29.798 [2024-07-15 10:34:24.302691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.798 10:34:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:29.798 10:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:29.798 10:34:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.798 10:34:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:30.056 Malloc1 00:21:30.056 10:34:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:30.314 10:34:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:30.572 10:34:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:30.830 [2024-07-15 10:34:25.438662] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.830 10:34:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:31.088 10:34:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:31.088 10:34:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:31.088 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:21:31.088 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:31.088 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:31.088 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:31.088 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:31.088 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:31.088 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:31.089 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:31.089 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:31.089 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:31.089 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:31.089 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:31.089 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:31.089 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:31.089 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:31.089 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:31.089 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:31.346 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:31.346 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:31.346 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:31.346 10:34:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:31.346 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:31.346 fio-3.35 00:21:31.346 Starting 1 thread 00:21:31.346 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.868 00:21:33.868 test: (groupid=0, jobs=1): err= 0: pid=2372159: Mon Jul 15 10:34:28 2024 00:21:33.868 read: IOPS=9228, BW=36.0MiB/s (37.8MB/s)(72.3MiB/2006msec) 00:21:33.868 slat (usec): min=2, max=169, avg= 2.63, stdev= 1.88 00:21:33.868 clat (usec): min=3280, max=12947, avg=7668.51, stdev=567.43 00:21:33.869 lat (usec): min=3308, max=12949, avg=7671.14, stdev=567.33 00:21:33.869 clat percentiles (usec): 00:21:33.869 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7242], 00:21:33.869 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:21:33.869 | 70.00th=[ 7963], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:21:33.869 | 99.00th=[ 8848], 99.50th=[ 9110], 99.90th=[11207], 99.95th=[12125], 00:21:33.869 | 99.99th=[12911] 00:21:33.869 bw ( KiB/s): min=36112, 
max=37456, per=99.92%, avg=36886.00, stdev=561.65, samples=4 00:21:33.869 iops : min= 9028, max= 9364, avg=9221.50, stdev=140.41, samples=4 00:21:33.869 write: IOPS=9233, BW=36.1MiB/s (37.8MB/s)(72.4MiB/2006msec); 0 zone resets 00:21:33.869 slat (usec): min=2, max=157, avg= 2.78, stdev= 1.57 00:21:33.869 clat (usec): min=1436, max=11893, avg=6164.23, stdev=496.46 00:21:33.869 lat (usec): min=1444, max=11896, avg=6167.01, stdev=496.41 00:21:33.869 clat percentiles (usec): 00:21:33.869 | 1.00th=[ 5014], 5.00th=[ 5407], 10.00th=[ 5604], 20.00th=[ 5800], 00:21:33.869 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6259], 00:21:33.869 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:21:33.869 | 99.00th=[ 7242], 99.50th=[ 7373], 99.90th=[ 9372], 99.95th=[11076], 00:21:33.869 | 99.99th=[11863] 00:21:33.869 bw ( KiB/s): min=36544, max=37096, per=99.99%, avg=36932.00, stdev=260.83, samples=4 00:21:33.869 iops : min= 9136, max= 9274, avg=9233.00, stdev=65.21, samples=4 00:21:33.869 lat (msec) : 2=0.02%, 4=0.12%, 10=99.72%, 20=0.14% 00:21:33.869 cpu : usr=59.40%, sys=35.91%, ctx=69, majf=0, minf=41 00:21:33.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:33.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:33.869 issued rwts: total=18513,18523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:33.869 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:33.869 00:21:33.869 Run status group 0 (all jobs): 00:21:33.869 READ: bw=36.0MiB/s (37.8MB/s), 36.0MiB/s-36.0MiB/s (37.8MB/s-37.8MB/s), io=72.3MiB (75.8MB), run=2006-2006msec 00:21:33.869 WRITE: bw=36.1MiB/s (37.8MB/s), 36.1MiB/s-36.1MiB/s (37.8MB/s-37.8MB/s), io=72.4MiB (75.9MB), run=2006-2006msec 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:33.869 10:34:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:34.127 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:34.127 fio-3.35 00:21:34.127 Starting 1 thread 00:21:34.127 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.649 00:21:36.649 test: (groupid=0, jobs=1): err= 0: pid=2372610: Mon Jul 15 10:34:31 2024 00:21:36.649 read: IOPS=8189, BW=128MiB/s (134MB/s)(257MiB/2007msec) 00:21:36.649 slat (nsec): min=2882, max=96739, avg=3666.06, stdev=1573.11 00:21:36.649 clat (usec): min=1968, max=52585, avg=9123.05, stdev=3499.06 00:21:36.649 lat (usec): min=1972, max=52589, avg=9126.72, stdev=3499.06 00:21:36.649 clat percentiles (usec): 00:21:36.649 | 1.00th=[ 4686], 5.00th=[ 5538], 10.00th=[ 6325], 20.00th=[ 7242], 00:21:36.649 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9503], 00:21:36.649 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11469], 95.00th=[12256], 00:21:36.649 | 99.00th=[14746], 99.50th=[47973], 99.90th=[49546], 99.95th=[49546], 00:21:36.649 | 99.99th=[52167] 00:21:36.649 bw ( KiB/s): min=59520, max=76928, per=51.97%, avg=68104.00, stdev=7872.36, samples=4 00:21:36.649 iops : min= 3720, max= 4808, avg=4256.50, stdev=492.02, samples=4 00:21:36.649 write: IOPS=4780, BW=74.7MiB/s (78.3MB/s)(140MiB/1869msec); 0 zone resets 00:21:36.649 slat (usec): min=30, max=131, avg=33.28, stdev= 4.26 00:21:36.649 clat (usec): min=6563, max=56425, avg=11159.47, stdev=3573.94 00:21:36.649 lat (usec): min=6596, max=56463, avg=11192.75, stdev=3573.79 00:21:36.649 clat percentiles (usec): 00:21:36.649 | 1.00th=[ 7373], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9241], 00:21:36.649 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10683], 60.00th=[11207], 00:21:36.649 | 70.00th=[11863], 80.00th=[12649], 90.00th=[13698], 95.00th=[14615], 00:21:36.649 | 99.00th=[16581], 99.50th=[18220], 99.90th=[55837], 99.95th=[56361], 00:21:36.649 | 99.99th=[56361] 00:21:36.649 bw ( KiB/s): min=60960, max=80768, per=92.85%, avg=71016.00, stdev=8834.60, samples=4 00:21:36.649 iops : min= 3810, max= 5048, avg=4438.50, stdev=552.16, samples=4 00:21:36.649 lat (msec) : 2=0.01%, 4=0.11%, 10=57.73%, 20=41.65%, 50=0.30% 00:21:36.649 lat (msec) : 100=0.20% 00:21:36.649 
cpu : usr=75.62%, sys=21.14%, ctx=38, majf=0, minf=61 00:21:36.649 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:36.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:36.649 issued rwts: total=16437,8934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:36.649 00:21:36.649 Run status group 0 (all jobs): 00:21:36.649 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=257MiB (269MB), run=2007-2007msec 00:21:36.649 WRITE: bw=74.7MiB/s (78.3MB/s), 74.7MiB/s-74.7MiB/s (78.3MB/s-78.3MB/s), io=140MiB (146MB), run=1869-1869msec 00:21:36.649 10:34:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:36.907 rmmod nvme_tcp 00:21:36.907 rmmod nvme_fabrics 00:21:36.907 rmmod nvme_keyring 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2371793 ']' 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2371793 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2371793 ']' 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2371793 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2371793 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2371793' 00:21:36.907 killing process with pid 2371793 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2371793 00:21:36.907 10:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2371793 00:21:37.165 10:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:37.165 10:34:31 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:37.165 10:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:37.165 10:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:37.165 10:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:37.165 10:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.165 10:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.165 10:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.701 10:34:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:39.701 00:21:39.701 real 0m12.239s 00:21:39.701 user 0m36.489s 00:21:39.701 sys 0m3.989s 00:21:39.701 10:34:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:39.701 10:34:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.701 ************************************ 00:21:39.701 END TEST nvmf_fio_host 00:21:39.701 ************************************ 00:21:39.701 10:34:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:39.701 10:34:33 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:39.701 10:34:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:39.701 10:34:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:39.701 10:34:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:39.701 ************************************ 00:21:39.701 START TEST nvmf_failover 00:21:39.701 ************************************ 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:39.701 * Looking for test storage... 
00:21:39.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:39.701 10:34:33 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:39.702 10:34:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:41.147 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.147 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:41.147 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:41.147 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:41.147 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:41.147 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:41.147 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:41.147 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:41.147 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:41.147 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:41.147 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:41.441 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:41.441 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.441 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:41.442 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:41.442 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:41.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:41.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:21:41.442 00:21:41.442 --- 10.0.0.2 ping statistics --- 00:21:41.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.442 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:21:41.442 00:21:41.442 --- 10.0.0.1 ping statistics --- 00:21:41.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.442 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2374797 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2374797 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2374797 ']' 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.442 10:34:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:41.442 [2024-07-15 10:34:36.005283] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:21:41.442 [2024-07-15 10:34:36.005371] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.442 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.442 [2024-07-15 10:34:36.077033] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:41.701 [2024-07-15 10:34:36.194769] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.701 [2024-07-15 10:34:36.194830] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.701 [2024-07-15 10:34:36.194845] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.701 [2024-07-15 10:34:36.194858] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.701 [2024-07-15 10:34:36.194869] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.701 [2024-07-15 10:34:36.194985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.701 [2024-07-15 10:34:36.195106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.701 [2024-07-15 10:34:36.195110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.634 10:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.635 10:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:42.635 10:34:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:42.635 10:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:42.635 10:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:42.635 10:34:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.635 10:34:36 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:42.635 [2024-07-15 10:34:37.216543] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.635 10:34:37 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:42.965 Malloc0 00:21:42.965 10:34:37 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:43.223 10:34:37 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:43.481 10:34:38 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:43.739 [2024-07-15 10:34:38.257814] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.739 10:34:38 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:43.997 [2024-07-15 
10:34:38.510565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:43.997 10:34:38 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:44.255 [2024-07-15 10:34:38.755247] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:44.255 10:34:38 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2375102 00:21:44.255 10:34:38 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:44.255 10:34:38 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:44.255 10:34:38 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2375102 /var/tmp/bdevperf.sock 00:21:44.255 10:34:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2375102 ']' 00:21:44.255 10:34:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.255 10:34:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.255 10:34:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.255 10:34:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.255 10:34:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:44.514 10:34:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.514 10:34:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:44.514 10:34:39 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:45.082 NVMe0n1 00:21:45.082 10:34:39 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:45.651 00:21:45.651 10:34:40 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2375237 00:21:45.651 10:34:40 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:45.651 10:34:40 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:46.588 10:34:41 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.847 [2024-07-15 10:34:41.322018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716070 is same with the state(5) to be set 00:21:46.847 [2024-07-15 10:34:41.322117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x716070 is same with the state(5) to be set 00:21:46.847
[2024-07-15 10:34:41.322133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x716070 is same with the state(5) to be set 00:21:46.847
[... same tcp.c:1607 message for tqpair=0x716070 repeated through 10:34:41.322556 while the port 4420 listener is torn down ...]
00:21:46.847 10:34:41 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:21:50.133 10:34:44 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:50.133 00
00:21:50.133 10:34:44 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:50.392 [2024-07-15 10:34:45.017396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717640 is same with the state(5) to be set
00:21:50.392 [... same tcp.c:1607 message for tqpair=0x717640 repeated through 10:34:45.017556 while the port 4421 listener is torn down ...]
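With bdevperf I/O running, failover.sh is walking the NVMe0 controller across the subsystem's three listeners; each nvmf_subsystem_remove_listener kills the path bdevperf is actively using, and the tcp.c:1607 floods above are the target tearing that connection down. The listener shuffle, condensed into plain rpc.py calls as they appear in this trace (the records after this note continue the sequence by re-adding 4420 and finally dropping 4422), with paths and sockets exactly as used in this workspace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    # paths to ports 4420 and 4421 were attached before perform_tests started
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420    # failover #1 -> 4421
    sleep 3
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421    # failover #2 -> 4422
    sleep 3
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420       # restore the first port
    sleep 1
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422    # failover #3 -> back to 4420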
00:21:50.392 10:34:45 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:21:53.677 10:34:48 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:53.677 [2024-07-15 10:34:48.318352] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:53.940 10:34:48 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:21:54.873 10:34:49 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:55.133 [2024-07-15 10:34:49.570500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x717e70 is same with the state(5) to be set
00:21:55.133 [... same tcp.c:1607 message for tqpair=0x717e70 repeated through 10:34:49.570653 while the port 4422 listener is torn down ...]
00:21:55.133 10:34:49 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2375237
00:22:01.767 0
00:22:01.767 10:34:55 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2375102
00:22:01.767 10:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2375102 ']'
00:22:01.767 10:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2375102
00:22:01.767 10:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:22:01.767 10:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:01.767 10:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2375102
00:22:01.767 10:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:22:01.767 10:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:22:01.767 10:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2375102'
killing process with pid 2375102
00:22:01.767 10:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2375102
00:22:01.767 10:34:55
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2375102 00:22:01.767 10:34:55 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:01.767 [2024-07-15 10:34:38.818832] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:01.767 [2024-07-15 10:34:38.818950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375102 ] 00:22:01.767 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.767 [2024-07-15 10:34:38.878442] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.767 [2024-07-15 10:34:38.986491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.767 Running I/O for 15 seconds... 00:22:01.767 [2024-07-15 10:34:41.323952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.767 [2024-07-15 10:34:41.323996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.767 [2024-07-15 10:34:41.324029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.767 [2024-07-15 10:34:41.324046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.767 [2024-07-15 10:34:41.324064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.767 [2024-07-15 10:34:41.324078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.767 [2024-07-15 10:34:41.324095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.767 [2024-07-15 10:34:41.324111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.767 [2024-07-15 10:34:41.324127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.767 [2024-07-15 10:34:41.324143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.767 [2024-07-15 10:34:41.324158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.767 [2024-07-15 10:34:41.324181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.767 [2024-07-15 10:34:41.324196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.767 [2024-07-15 10:34:41.324209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.767 [2024-07-15 10:34:41.324225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80648 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.767
[2024-07-15 10:34:41.324249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.767
[... the same nvme_qpair.c: 243/474 READ command/completion pair repeats for every in-flight read from lba:80656 through lba:80896, each aborted with SQ DELETION (00/08) ...]
[2024-07-15 10:34:41.325207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.769
[2024-07-15 10:34:41.325221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.769
[... the same WRITE command/completion pair repeats from lba:80928 through lba:81296, same abort status ...]
[2024-07-15 10:34:41.326662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.770
[2024-07-15 10:34:41.326681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81304 len:8 PRP1 0x0 PRP2 0x0 00:22:01.770
[2024-07-15 10:34:41.326695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.770
[... queued writes from lba:81312 through lba:81544 are likewise aborted (nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs) and completed manually with the same status ...]
[2024-07-15 10:34:41.328275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.772
[2024-07-15 10:34:41.328286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.772
[2024-07-15 10:34:41.328297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81552 len:8 PRP1 0x0 PRP2 0x0 00:22:01.772 [2024-07-15 10:34:41.328310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.772 [2024-07-15 10:34:41.328323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.772 [2024-07-15 10:34:41.328333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.772 [2024-07-15 10:34:41.328345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81560 len:8 PRP1 0x0 PRP2 0x0 00:22:01.772 [2024-07-15 10:34:41.328357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.772 [2024-07-15 10:34:41.328370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.772 [2024-07-15 10:34:41.328381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.772 [2024-07-15 10:34:41.328398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81568 len:8 PRP1 0x0 PRP2 0x0 00:22:01.772 [2024-07-15 10:34:41.328411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.772 [2024-07-15 10:34:41.328424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.772 [2024-07-15 10:34:41.328435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.772 [2024-07-15 10:34:41.328446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81576 len:8 PRP1 0x0 PRP2 0x0 00:22:01.772 [2024-07-15 10:34:41.328464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.772 [2024-07-15 10:34:41.328477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.772 [2024-07-15 10:34:41.328489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.772 [2024-07-15 10:34:41.328500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81584 len:8 PRP1 0x0 PRP2 0x0 00:22:01.772 [2024-07-15 10:34:41.328512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.772 [2024-07-15 10:34:41.328526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.772 [2024-07-15 10:34:41.328537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.772 [2024-07-15 10:34:41.328547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81592 len:8 PRP1 0x0 PRP2 0x0 00:22:01.772 [2024-07-15 10:34:41.328560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.772 [2024-07-15 10:34:41.328573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.772 [2024-07-15 10:34:41.328584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.772 [2024-07-15 10:34:41.328595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81600 len:8 PRP1 0x0 PRP2 0x0 00:22:01.772 [2024-07-15 10:34:41.328608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.772 [2024-07-15 10:34:41.328621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.772 [2024-07-15 10:34:41.328632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.772 [2024-07-15 10:34:41.328649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81608 len:8 PRP1 0x0 PRP2 0x0 00:22:01.772 [2024-07-15 10:34:41.328662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.772 [2024-07-15 10:34:41.328675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.772 [2024-07-15 10:34:41.328687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.772 [2024-07-15 10:34:41.328698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80904 len:8 PRP1 0x0 PRP2 0x0 00:22:01.772 [2024-07-15 10:34:41.328711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.772 [2024-07-15 10:34:41.328724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.772 [2024-07-15 10:34:41.328734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.772 [2024-07-15 10:34:41.328745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80912 len:8 PRP1 0x0 PRP2 0x0 00:22:01.772 [2024-07-15 10:34:41.328759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.772 [2024-07-15 10:34:41.328824] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ce3390 was disconnected and freed. reset controller. 
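The long run of paired *ERROR*/*NOTICE* records above is the driver-side teardown pattern for a dropped TCP path: nvme_qpair_abort_queued_reqs walks every request still queued on qpair 0x1ce3390 and completes it manually with status ABORTED - SQ DELETION (00/08), after which bdev_nvme frees the qpair and schedules a controller reset. When triaging a burst like this, a couple of standard grep/sort one-liners summarize it; the file name console.log below is only a placeholder for wherever the console output was saved:

    # total commands aborted by SQ deletion in the saved log (placeholder file name)
    grep -c 'ABORTED - SQ DELETION' console.log
    # LBAs that were still queued when the path dropped, with repeat counts
    grep -o 'lba:[0-9]*' console.log | sort -t: -k2 -n | uniq -c | tail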
00:22:01.772 [2024-07-15 10:34:41.328844] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:22:01.772 [2024-07-15 10:34:41.328894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.772 [2024-07-15 10:34:41.328914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.772 [2024-07-15 10:34:41.328929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.772 [2024-07-15 10:34:41.328943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.773 [2024-07-15 10:34:41.328957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.773 [2024-07-15 10:34:41.328972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.773 [2024-07-15 10:34:41.328986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.773 [2024-07-15 10:34:41.328999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.773 [2024-07-15 10:34:41.329013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:01.773 [2024-07-15 10:34:41.332295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:01.773 [2024-07-15 10:34:41.332335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbd0f0 (9): Bad file descriptor
00:22:01.773 [2024-07-15 10:34:41.457440] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
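The failover-and-reset sequence just above is what bdev_nvme does when a second transport ID is registered for the same controller: bdev_nvme_failover_trid moves I/O from 10.0.0.2:4420 to 10.0.0.2:4421, the admin queue's outstanding ASYNC EVENT REQUESTs are aborted along the way, and the reset then completes successfully. As a rough sketch of how such an alternate path gets registered (the controller name NVMe0 is illustrative and the test scripts may pass different arguments; -b/-t/-f/-a/-s/-n are standard rpc.py bdev_nvme_attach_controller options):

    # attach the primary path, then the same subsystem via a second portal
    # (NVMe0 is a placeholder controller name, not taken from this log)
    ./scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1

With both trids attached under one name, losing the first portal produces exactly the Start failover / Resetting controller successful pair seen here.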
00:22:01.773 [2024-07-15 10:34:45.019657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.773 [2024-07-15 10:34:45.019703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.019734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.773 [2024-07-15 10:34:45.019749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.019776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.773 [2024-07-15 10:34:45.019791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.019808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.773 [2024-07-15 10:34:45.019821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.019835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.773 [2024-07-15 10:34:45.019849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.019865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.773 [2024-07-15 10:34:45.019901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.019919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.773 [2024-07-15 10:34:45.019934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.019950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.773 [2024-07-15 10:34:45.019964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.019979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.773 [2024-07-15 10:34:45.019992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.020007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.773 [2024-07-15 10:34:45.020020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 
10:34:45.020035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.773 [2024-07-15 10:34:45.020049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.020064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:105280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.773 [2024-07-15 10:34:45.020077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.020093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.773 [2024-07-15 10:34:45.020107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.020130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:105296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.773 [2024-07-15 10:34:45.020144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.020160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.773 [2024-07-15 10:34:45.020178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.020208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.773 [2024-07-15 10:34:45.020223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.020237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.773 [2024-07-15 10:34:45.020250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.020265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.773 [2024-07-15 10:34:45.020278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.020293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.773 [2024-07-15 10:34:45.020307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.020323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.773 [2024-07-15 10:34:45.020336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.773 [2024-07-15 10:34:45.020351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.773 [2024-07-15 10:34:45.020365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:105376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:105408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:105416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:36 nsid:1 lba:105432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:105440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:105448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:105456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:105472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:105480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:105488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:105512 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.020977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:105520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.020991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.021006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:105528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.021019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.021034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.021048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.021063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.021077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.021092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.021106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.021121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.021135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.021151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.021165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.021180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:105576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.021208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.021224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.021237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.021251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:01.774 [2024-07-15 10:34:45.021264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.021280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.021294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.021309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.021325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.021340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.021354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.021369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.774 [2024-07-15 10:34:45.021383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.774 [2024-07-15 10:34:45.021397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.775 [2024-07-15 10:34:45.021409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.021424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.775 [2024-07-15 10:34:45.021437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.021452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.775 [2024-07-15 10:34:45.021465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.021480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.775 [2024-07-15 10:34:45.021493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.021508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.775 [2024-07-15 10:34:45.021521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.021535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.775 [2024-07-15 10:34:45.021548] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.021563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:01.775 [2024-07-15 10:34:45.021576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.021610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.775 [2024-07-15 10:34:45.021626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105688 len:8 PRP1 0x0 PRP2 0x0 00:22:01.775 [2024-07-15 10:34:45.021639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.021657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.775 [2024-07-15 10:34:45.021669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.775 [2024-07-15 10:34:45.021680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105696 len:8 PRP1 0x0 PRP2 0x0 00:22:01.775 [2024-07-15 10:34:45.021692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.021709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.775 [2024-07-15 10:34:45.021720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.775 [2024-07-15 10:34:45.021731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105704 len:8 PRP1 0x0 PRP2 0x0 00:22:01.775 [2024-07-15 10:34:45.021745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.021758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.775 [2024-07-15 10:34:45.021768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.775 [2024-07-15 10:34:45.021781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105712 len:8 PRP1 0x0 PRP2 0x0 00:22:01.775 [2024-07-15 10:34:45.021794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.021807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.775 [2024-07-15 10:34:45.021817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.775 [2024-07-15 10:34:45.021828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105720 len:8 PRP1 0x0 PRP2 0x0 00:22:01.775 [2024-07-15 10:34:45.021840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.021854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.775 [2024-07-15 10:34:45.021889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.775 
[2024-07-15 10:34:45.021902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105728 len:8 PRP1 0x0 PRP2 0x0 00:22:01.775 [2024-07-15 10:34:45.021915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.021929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.775 [2024-07-15 10:34:45.021940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.775 [2024-07-15 10:34:45.021952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105736 len:8 PRP1 0x0 PRP2 0x0 00:22:01.775 [2024-07-15 10:34:45.021964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.021978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.775 [2024-07-15 10:34:45.021989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.775 [2024-07-15 10:34:45.021999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105744 len:8 PRP1 0x0 PRP2 0x0 00:22:01.775 [2024-07-15 10:34:45.022011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.022025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.775 [2024-07-15 10:34:45.022036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.775 [2024-07-15 10:34:45.022047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105752 len:8 PRP1 0x0 PRP2 0x0 00:22:01.775 [2024-07-15 10:34:45.022060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.022073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.775 [2024-07-15 10:34:45.022083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.775 [2024-07-15 10:34:45.022095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105760 len:8 PRP1 0x0 PRP2 0x0 00:22:01.775 [2024-07-15 10:34:45.022111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.022125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.775 [2024-07-15 10:34:45.022136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.775 [2024-07-15 10:34:45.022148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105768 len:8 PRP1 0x0 PRP2 0x0 00:22:01.775 [2024-07-15 10:34:45.022176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.022190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.775 [2024-07-15 10:34:45.022201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.775 [2024-07-15 10:34:45.022212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105776 len:8 PRP1 0x0 PRP2 0x0 00:22:01.775 [2024-07-15 10:34:45.022226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.022238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.775 [2024-07-15 10:34:45.022250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.775 [2024-07-15 10:34:45.022261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105784 len:8 PRP1 0x0 PRP2 0x0 00:22:01.775 [2024-07-15 10:34:45.022273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.022286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.775 [2024-07-15 10:34:45.022296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.775 [2024-07-15 10:34:45.022307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105792 len:8 PRP1 0x0 PRP2 0x0 00:22:01.775 [2024-07-15 10:34:45.022319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.022332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.775 [2024-07-15 10:34:45.022342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.775 [2024-07-15 10:34:45.022353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105800 len:8 PRP1 0x0 PRP2 0x0 00:22:01.775 [2024-07-15 10:34:45.022365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.022377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.775 [2024-07-15 10:34:45.022388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.775 [2024-07-15 10:34:45.022399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105808 len:8 PRP1 0x0 PRP2 0x0 00:22:01.775 [2024-07-15 10:34:45.022411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.022424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.775 [2024-07-15 10:34:45.022434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.775 [2024-07-15 10:34:45.022445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105816 len:8 PRP1 0x0 PRP2 0x0 00:22:01.775 [2024-07-15 10:34:45.022456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.775 [2024-07-15 10:34:45.022469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.775 [2024-07-15 10:34:45.022480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.776 [2024-07-15 10:34:45.022494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:105824 len:8 PRP1 0x0 PRP2 0x0 00:22:01.776 [2024-07-15 10:34:45.022506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.776 [2024-07-15 10:34:45.022519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.776 [2024-07-15 10:34:45.022530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.776 [2024-07-15 10:34:45.022541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105832 len:8 PRP1 0x0 PRP2 0x0 00:22:01.776 [2024-07-15 10:34:45.022553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.776 [2024-07-15 10:34:45.022566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.776 [2024-07-15 10:34:45.022577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.776 [2024-07-15 10:34:45.022588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105840 len:8 PRP1 0x0 PRP2 0x0 00:22:01.776 [2024-07-15 10:34:45.022601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.776 [2024-07-15 10:34:45.022614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.776 [2024-07-15 10:34:45.022624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.776 [2024-07-15 10:34:45.022635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105848 len:8 PRP1 0x0 PRP2 0x0 00:22:01.776 [2024-07-15 10:34:45.022647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.776 [2024-07-15 10:34:45.022660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.776 [2024-07-15 10:34:45.022670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.776 [2024-07-15 10:34:45.022681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105856 len:8 PRP1 0x0 PRP2 0x0 00:22:01.776 [2024-07-15 10:34:45.022693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.776 [2024-07-15 10:34:45.022706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.776 [2024-07-15 10:34:45.022717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.776 [2024-07-15 10:34:45.022727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105864 len:8 PRP1 0x0 PRP2 0x0 00:22:01.776 [2024-07-15 10:34:45.022739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.776 [2024-07-15 10:34:45.022752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.776 [2024-07-15 10:34:45.022762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.776 [2024-07-15 10:34:45.022773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105872 len:8 PRP1 0x0 PRP2 
00:22:01.776 [2024-07-15 10:34:45.022798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:01.776 [2024-07-15 10:34:45.022809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:01.776 [2024-07-15 10:34:45.022819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105880 len:8 PRP1 0x0 PRP2 0x0
00:22:01.776 [2024-07-15 10:34:45.022832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.778 [2024-07-15 10:34:45.040176] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e87b70 was disconnected and freed. reset controller.
00:22:01.778 [2024-07-15 10:34:45.040196] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:22:01.778 [2024-07-15 10:34:45.040250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.778 [2024-07-15 10:34:45.040281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.778 [2024-07-15 10:34:45.040312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.778 [2024-07-15 10:34:45.040326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.778 [2024-07-15 10:34:45.040340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.778 [2024-07-15 10:34:45.040354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.778 [2024-07-15 10:34:45.040367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:01.778 [2024-07-15 10:34:45.040381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.778 [2024-07-15 10:34:45.040395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:01.778 [2024-07-15 10:34:45.040454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbd0f0 (9): Bad file descriptor
00:22:01.778 [2024-07-15 10:34:45.043865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:01.778 [2024-07-15 10:34:45.116268] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:01.778 [2024-07-15 10:34:49.571548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:01.778 [2024-07-15 10:34:49.571588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.780 [2024-07-15 10:34:49.572739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:01.780 [2024-07-15 10:34:49.572752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:01.781 [2024-07-15 10:34:49.574215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:01.781 [2024-07-15 10:34:49.574233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43224 len:8 PRP1 0x0 PRP2 0x0
00:22:01.781 [2024-07-15 10:34:49.574247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
dnr:0 00:22:01.782 [2024-07-15 10:34:49.575555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.782 [2024-07-15 10:34:49.575565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.782 [2024-07-15 10:34:49.575576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43440 len:8 PRP1 0x0 PRP2 0x0 00:22:01.782 [2024-07-15 10:34:49.575590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.782 [2024-07-15 10:34:49.575604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.782 [2024-07-15 10:34:49.575615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.782 [2024-07-15 10:34:49.575626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43448 len:8 PRP1 0x0 PRP2 0x0 00:22:01.782 [2024-07-15 10:34:49.575639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.782 [2024-07-15 10:34:49.575652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.782 [2024-07-15 10:34:49.575663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.782 [2024-07-15 10:34:49.575673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43456 len:8 PRP1 0x0 PRP2 0x0 00:22:01.782 [2024-07-15 10:34:49.575686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.782 [2024-07-15 10:34:49.575699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.782 [2024-07-15 10:34:49.575710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.782 [2024-07-15 10:34:49.575721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43464 len:8 PRP1 0x0 PRP2 0x0 00:22:01.782 [2024-07-15 10:34:49.575734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.782 [2024-07-15 10:34:49.575746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.782 [2024-07-15 10:34:49.575757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.782 [2024-07-15 10:34:49.575768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43472 len:8 PRP1 0x0 PRP2 0x0 00:22:01.783 [2024-07-15 10:34:49.575781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.783 [2024-07-15 10:34:49.575799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.783 [2024-07-15 10:34:49.575810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.783 [2024-07-15 10:34:49.575821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43480 len:8 PRP1 0x0 PRP2 0x0 00:22:01.783 [2024-07-15 10:34:49.575834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.783 [2024-07-15 10:34:49.575847] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.783 [2024-07-15 10:34:49.575885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.783 [2024-07-15 10:34:49.575899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43488 len:8 PRP1 0x0 PRP2 0x0 00:22:01.783 [2024-07-15 10:34:49.575912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.783 [2024-07-15 10:34:49.575927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.783 [2024-07-15 10:34:49.575938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.783 [2024-07-15 10:34:49.575950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43496 len:8 PRP1 0x0 PRP2 0x0 00:22:01.783 [2024-07-15 10:34:49.575962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.783 [2024-07-15 10:34:49.575975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.783 [2024-07-15 10:34:49.575987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.783 [2024-07-15 10:34:49.575998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43504 len:8 PRP1 0x0 PRP2 0x0 00:22:01.783 [2024-07-15 10:34:49.576011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.783 [2024-07-15 10:34:49.576024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.783 [2024-07-15 10:34:49.576035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.783 [2024-07-15 10:34:49.576046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43512 len:8 PRP1 0x0 PRP2 0x0 00:22:01.783 [2024-07-15 10:34:49.576059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.783 [2024-07-15 10:34:49.576073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.783 [2024-07-15 10:34:49.576086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.783 [2024-07-15 10:34:49.576097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43520 len:8 PRP1 0x0 PRP2 0x0 00:22:01.783 [2024-07-15 10:34:49.576109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.783 [2024-07-15 10:34:49.576122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.783 [2024-07-15 10:34:49.576134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.783 [2024-07-15 10:34:49.576146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43528 len:8 PRP1 0x0 PRP2 0x0 00:22:01.783 [2024-07-15 10:34:49.576174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.783 [2024-07-15 10:34:49.576188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:01.783 [2024-07-15 10:34:49.576198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.783 [2024-07-15 10:34:49.576210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42824 len:8 PRP1 0x0 PRP2 0x0 00:22:01.783 [2024-07-15 10:34:49.576222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.783 [2024-07-15 10:34:49.576241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:01.783 [2024-07-15 10:34:49.576252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:01.783 [2024-07-15 10:34:49.576264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42832 len:8 PRP1 0x0 PRP2 0x0 00:22:01.783 [2024-07-15 10:34:49.576276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.783 [2024-07-15 10:34:49.576348] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ceccc0 was disconnected and freed. reset controller. 00:22:01.783 [2024-07-15 10:34:49.576368] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:01.783 [2024-07-15 10:34:49.576418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.783 [2024-07-15 10:34:49.576438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.783 [2024-07-15 10:34:49.576454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.783 [2024-07-15 10:34:49.576468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.783 [2024-07-15 10:34:49.576481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.783 [2024-07-15 10:34:49.576495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.783 [2024-07-15 10:34:49.576509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.783 [2024-07-15 10:34:49.576523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.783 [2024-07-15 10:34:49.576537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.783 [2024-07-15 10:34:49.576580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbd0f0 (9): Bad file descriptor 00:22:01.783 [2024-07-15 10:34:49.579872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.783 [2024-07-15 10:34:49.609270] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
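The wall of ABORTED - SQ DELETION completions above is the expected signature of a path being torn down mid-run: bdev_nvme manually completes every request still queued on the dying qpair as aborted, then fails the trid over (here from 10.0.0.2:4422 back to 10.0.0.2:4420) and resets the controller. Failover only has somewhere to go because the test registered the same controller name against all three listeners beforehand. A minimal sketch of that registration, using the RPC socket, address, and NQN from this run (the loop is just shorthand for the three separate calls the script makes):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # attach the same controller name against each listener; bdev_nvme keeps
    # the extra trids as failover targets for NVMe0n1
    for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done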
00:22:01.783
00:22:01.783 Latency(us)
00:22:01.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:01.783 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:01.783 Verification LBA range: start 0x0 length 0x4000
00:22:01.783 NVMe0n1 : 15.02 8563.34 33.45 578.34 0.00 13973.92 813.13 31457.28
00:22:01.783 ===================================================================================================================
00:22:01.783 Total : 8563.34 33.45 578.34 0.00 13973.92 813.13 31457.28
00:22:01.783 Received shutdown signal, test time was about 15.000000 seconds
00:22:01.783
00:22:01.783 Latency(us)
00:22:01.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:01.783 ===================================================================================================================
00:22:01.783 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:01.783 10:34:55 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:01.783 10:34:55 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:22:01.783 10:34:55 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:22:01.783 10:34:55 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:01.783 10:34:55 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2377078
00:22:01.783 10:34:55 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2377078 /var/tmp/bdevperf.sock
00:22:01.783 10:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2377078 ']'
00:22:01.783 10:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:01.783 10:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:01.783 10:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:01.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
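The grep at host/failover.sh@65 is the pass/fail gate for the first half of the test: the captured bdevperf output must contain exactly three 'Resetting controller successful' notices, one per path that was yanked out from under the I/O. A hedged sketch of the same check (try.txt is the capture file this test writes and later removes; the error message is mine):

    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$log")
    if (( count != 3 )); then
      echo "expected 3 successful controller resets, saw $count" >&2
      exit 1
    fi

The bdevperf relaunched right after is started with -z, so it comes up idle and waits on /var/tmp/bdevperf.sock for controllers to be attached before any I/O is generated.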
00:22:01.784 10:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.784 10:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:01.784 10:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.784 10:34:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:01.784 10:34:55 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:01.784 [2024-07-15 10:34:56.089315] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:01.784 10:34:56 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:01.784 [2024-07-15 10:34:56.337998] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:01.784 10:34:56 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:02.353 NVMe0n1 00:22:02.353 10:34:56 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:02.610 00:22:02.610 10:34:57 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:02.867 00:22:02.867 10:34:57 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:02.867 10:34:57 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:03.124 10:34:57 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:03.382 10:34:57 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:06.669 10:35:00 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:06.669 10:35:00 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:06.669 10:35:01 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2377746 00:22:06.669 10:35:01 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:06.669 10:35:01 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2377746 00:22:08.046 0 00:22:08.046 10:35:02 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:08.046 [2024-07-15 10:34:55.574642] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:22:08.046 [2024-07-15 10:34:55.574741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377078 ] 00:22:08.046 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.046 [2024-07-15 10:34:55.634011] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.046 [2024-07-15 10:34:55.738772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.046 [2024-07-15 10:34:57.918195] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:08.046 [2024-07-15 10:34:57.918327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.046 [2024-07-15 10:34:57.918351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.046 [2024-07-15 10:34:57.918372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.046 [2024-07-15 10:34:57.918385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.046 [2024-07-15 10:34:57.918399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.046 [2024-07-15 10:34:57.918413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.046 [2024-07-15 10:34:57.918428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.046 [2024-07-15 10:34:57.918443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.046 [2024-07-15 10:34:57.918458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:08.046 [2024-07-15 10:34:57.918515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:08.046 [2024-07-15 10:34:57.918554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24600f0 (9): Bad file descriptor 00:22:08.046 [2024-07-15 10:34:58.092091] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:08.046 Running I/O for 1 seconds... 
00:22:08.046
00:22:08.046 Latency(us)
00:22:08.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:08.046 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:08.046 Verification LBA range: start 0x0 length 0x4000
00:22:08.046 NVMe0n1 : 1.01 8320.91 32.50 0.00 0.00 15314.21 3082.62 14854.83
00:22:08.046 ===================================================================================================================
00:22:08.046 Total : 8320.91 32.50 0.00 0.00 15314.21 3082.62 14854.83
00:22:08.046 10:35:02 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:08.046 10:35:02 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:22:08.046 10:35:02 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:08.304 10:35:02 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:08.304 10:35:02 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:22:08.562 10:35:03 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:08.819 10:35:03 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:22:12.104 10:35:06 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:12.104 10:35:06 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:22:12.104 10:35:06 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2377078
00:22:12.104 10:35:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2377078 ']'
00:22:12.104 10:35:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2377078
00:22:12.104 10:35:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:22:12.104 10:35:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:12.104 10:35:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2377078
00:22:12.104 10:35:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:22:12.104 10:35:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:22:12.104 10:35:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2377078'
00:22:12.104 killing process with pid 2377078
00:22:12.104 10:35:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2377078
00:22:12.104 10:35:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2377078
00:22:12.362 10:35:06 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:22:12.362 10:35:06 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:12.620 10:35:07 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:22:12.620
10:35:07 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:12.620 10:35:07 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:12.620 10:35:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:12.620 10:35:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:12.621 rmmod nvme_tcp 00:22:12.621 rmmod nvme_fabrics 00:22:12.621 rmmod nvme_keyring 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2374797 ']' 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2374797 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2374797 ']' 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2374797 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2374797 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2374797' 00:22:12.621 killing process with pid 2374797 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2374797 00:22:12.621 10:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2374797 00:22:13.191 10:35:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:13.191 10:35:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:13.191 10:35:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:13.191 10:35:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:13.191 10:35:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:13.191 10:35:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.191 10:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.191 10:35:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.096 10:35:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:15.096 00:22:15.096 real 0m35.793s 00:22:15.096 user 2m6.387s 00:22:15.096 sys 0m5.804s 00:22:15.096 10:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:15.096 10:35:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
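The teardown just traced is the stock nvmftestfini sequence: delete the subsystem over RPC, unload the host-side NVMe-oF modules, kill the target, drop the test namespace, and flush the initiator address. Condensed into a sketch (the pid and interface names are the ones from this run; _remove_spdk_ns is assumed here to boil down to deleting the cvl_0_0_ns_spdk netns, and error handling is omitted):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp           # drags nvme_fabrics and nvme_keyring out too, per the rmmod lines above
    kill 2374797 && wait 2374797      # nvmf_tgt for this run; wait only works from the shell that launched it
    ip netns delete cvl_0_0_ns_spdk   # assumption: what _remove_spdk_ns amounts to here
    ip -4 addr flush cvl_0_1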
00:22:15.096 ************************************ 00:22:15.096 END TEST nvmf_failover 00:22:15.096 ************************************ 00:22:15.096 10:35:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:15.096 10:35:09 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:15.096 10:35:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:15.096 10:35:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:15.096 10:35:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:15.096 ************************************ 00:22:15.096 START TEST nvmf_host_discovery 00:22:15.096 ************************************ 00:22:15.096 10:35:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:15.096 * Looking for test storage... 00:22:15.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:15.096 10:35:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:15.096 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:15.096 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.096 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:15.097 10:35:09 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:15.097 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:15.355 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.355 10:35:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.355 10:35:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.355 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:15.355 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:15.355 10:35:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:22:15.355 10:35:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.259 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.259 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:22:17.259 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:17.259 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:17.259 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:17.259 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:17.259 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:17.259 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:22:17.259 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:17.259 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:22:17.259 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.260 10:35:11 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:17.260 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:17.260 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:17.260 10:35:11 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:17.260 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:17.260 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.260 10:35:11 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:17.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:22:17.260 00:22:17.260 --- 10.0.0.2 ping statistics --- 00:22:17.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.260 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:22:17.260 00:22:17.260 --- 10.0.0.1 ping statistics --- 00:22:17.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.260 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2380485 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
2380485 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2380485 ']' 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.260 10:35:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.548 [2024-07-15 10:35:11.930837] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:17.548 [2024-07-15 10:35:11.930936] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.548 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.548 [2024-07-15 10:35:11.993070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.548 [2024-07-15 10:35:12.098223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.548 [2024-07-15 10:35:12.098289] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.548 [2024-07-15 10:35:12.098302] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.548 [2024-07-15 10:35:12.098313] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.548 [2024-07-15 10:35:12.098323] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
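Because nvmf_tgt was started with -e 0xFFFF, every tracepoint group is enabled and the trace ring lives in shared memory. The two notices above give the recipe; as a usage example (the spdk_trace path assumes the default build layout of this workspace):

    # live snapshot of the running target (app name nvmf, shm id 0, per the notice)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
    # or keep the raw shm file for offline decoding after the target exits
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0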
00:22:17.548 [2024-07-15 10:35:12.098358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.807 [2024-07-15 10:35:12.233620] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.807 [2024-07-15 10:35:12.241794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.807 null0 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.807 null1 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2380507 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 2380507 /tmp/host.sock 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2380507 ']' 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:17.807 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.807 10:35:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.807 [2024-07-15 10:35:12.314207] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:17.807 [2024-07-15 10:35:12.314286] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2380507 ] 00:22:17.807 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.807 [2024-07-15 10:35:12.375275] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.067 [2024-07-15 10:35:12.492463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:19.003 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.004 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:19.004 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:19.004 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.004 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:19.004 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:19.004 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.004 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.004 [2024-07-15 10:35:13.613552] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.004 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.004 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:19.004 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:19.004 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:19.004 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.004 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.004 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:19.004 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:19.004 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:22:19.261 10:35:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:19.827 [2024-07-15 10:35:14.364103] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:19.827 [2024-07-15 10:35:14.364127] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:19.828 [2024-07-15 10:35:14.364150] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:19.828 [2024-07-15 10:35:14.450468] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:20.087 [2024-07-15 10:35:14.515506] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:20.087 [2024-07-15 10:35:14.515534] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:20.346 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.347 10:35:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:20.605 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.605 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:20.605 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:20.605 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:20.605 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:20.605 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:20.605 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:20.605 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:20.605 10:35:15 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:22:20.605 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:20.605 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:20.605 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:20.605 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.605 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:20.606 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.864 [2024-07-15 10:35:15.298647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:20.864 [2024-07-15 10:35:15.299530] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:20.864 [2024-07-15 10:35:15.299566] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.864 [2024-07-15 10:35:15.385305] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:20.864 10:35:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:20.864 [2024-07-15 10:35:15.447855] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:20.864 [2024-07-15 10:35:15.447895] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:20.864 [2024-07-15 10:35:15.447908] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:21.797 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:21.797 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:21.797 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:21.797 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:21.797 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:21.797 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.797 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:21.797 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.797 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:21.797 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.057 [2024-07-15 10:35:16.522384] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:22.057 [2024-07-15 10:35:16.522419] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:22.057 [2024-07-15 10:35:16.530634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.057 [2024-07-15 10:35:16.530665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.057 [2024-07-15 10:35:16.530705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.057 [2024-07-15 10:35:16.530721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.057 [2024-07-15 10:35:16.530736] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.057 [2024-07-15 10:35:16.530751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.057 [2024-07-15 10:35:16.530765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.057 [2024-07-15 10:35:16.530779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.057 [2024-07-15 10:35:16.530793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2c00 is same with the state(5) to be set 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.057 [2024-07-15 10:35:16.540629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb2c00 (9): Bad file descriptor 00:22:22.057 [2024-07-15 10:35:16.550674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:22.057 [2024-07-15 10:35:16.550921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.057 [2024-07-15 10:35:16.550967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb2c00 with addr=10.0.0.2, port=4420 00:22:22.057 [2024-07-15 10:35:16.550985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2c00 is same with the state(5) to be set 00:22:22.057 [2024-07-15 10:35:16.551008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb2c00 (9): Bad file descriptor 00:22:22.057 [2024-07-15 10:35:16.551043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:22.057 [2024-07-15 10:35:16.551062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:22.057 [2024-07-15 10:35:16.551077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:22.057 [2024-07-15 10:35:16.551112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:22.057 [2024-07-15 10:35:16.560758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:22.057 [2024-07-15 10:35:16.560987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.057 [2024-07-15 10:35:16.561021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb2c00 with addr=10.0.0.2, port=4420 00:22:22.057 [2024-07-15 10:35:16.561038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2c00 is same with the state(5) to be set 00:22:22.057 [2024-07-15 10:35:16.561061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb2c00 (9): Bad file descriptor 00:22:22.057 [2024-07-15 10:35:16.561101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:22.057 [2024-07-15 10:35:16.561120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:22.057 [2024-07-15 10:35:16.561134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:22.057 [2024-07-15 10:35:16.561153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:22.057 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:22.057 [2024-07-15 10:35:16.570847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:22.057 [2024-07-15 10:35:16.571114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.057 [2024-07-15 10:35:16.571143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb2c00 with addr=10.0.0.2, port=4420 00:22:22.057 [2024-07-15 10:35:16.571189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2c00 is same with the state(5) to be set 00:22:22.057 [2024-07-15 10:35:16.571214] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb2c00 (9): Bad file descriptor 00:22:22.057 [2024-07-15 10:35:16.572106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:22.057 [2024-07-15 10:35:16.572131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:22.057 [2024-07-15 10:35:16.572145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:22.057 [2024-07-15 10:35:16.572201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:22.057 [2024-07-15 10:35:16.580934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:22.057 [2024-07-15 10:35:16.581120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.057 [2024-07-15 10:35:16.581149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb2c00 with addr=10.0.0.2, port=4420 00:22:22.057 [2024-07-15 10:35:16.581182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2c00 is same with the state(5) to be set 00:22:22.057 [2024-07-15 10:35:16.581206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb2c00 (9): Bad file descriptor 00:22:22.057 [2024-07-15 10:35:16.581246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:22.058 [2024-07-15 10:35:16.581266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:22.058 [2024-07-15 10:35:16.581287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:22.058 [2024-07-15 10:35:16.581323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:22.058 [2024-07-15 10:35:16.591007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:22.058 [2024-07-15 10:35:16.591187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.058 [2024-07-15 10:35:16.591233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb2c00 with addr=10.0.0.2, port=4420 00:22:22.058 [2024-07-15 10:35:16.591251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2c00 is same with the state(5) to be set 00:22:22.058 [2024-07-15 10:35:16.591275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb2c00 (9): Bad file descriptor 00:22:22.058 [2024-07-15 10:35:16.591334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:22.058 [2024-07-15 10:35:16.591356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:22.058 [2024-07-15 10:35:16.591371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:22.058 [2024-07-15 10:35:16.591392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.058 [2024-07-15 10:35:16.601077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:22.058 [2024-07-15 10:35:16.601313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.058 [2024-07-15 10:35:16.601344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb2c00 with addr=10.0.0.2, port=4420 00:22:22.058 [2024-07-15 10:35:16.601362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb2c00 is same with the state(5) to be set 00:22:22.058 [2024-07-15 10:35:16.601386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb2c00 (9): Bad file descriptor 00:22:22.058 [2024-07-15 10:35:16.601431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:22.058 [2024-07-15 10:35:16.601451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:22.058 [2024-07-15 10:35:16.601466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:22.058 [2024-07-15 10:35:16.601488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:22.058 [2024-07-15 10:35:16.608435] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:22.058 [2024-07-15 10:35:16.608469] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:22.058 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.318 10:35:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.253 [2024-07-15 10:35:17.882054] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:23.253 [2024-07-15 10:35:17.882094] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:23.253 [2024-07-15 10:35:17.882123] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:23.511 [2024-07-15 10:35:17.969402] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:23.511 [2024-07-15 10:35:18.078937] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:23.511 [2024-07-15 10:35:18.078978] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:23.511 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.511 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:23.511 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:23.511 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:23.511 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:23.511 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:23.511 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:23.511 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:23.511 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:23.511 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.511 10:35:18 nvmf_tcp.nvmf_host_discovery -- 
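The @74/@75 frames track notify events with a moving cursor: notify_get_notifications -i <last_id> returns only events newer than the cursor, jq counts them, and the cursor advances. A sketch of that shape, inferred from the trace (here two new events moved notify_id from 2 to 4):

    get_notification_count() {
        # Count events newer than the current cursor...
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
        # ...then advance the cursor past them.
        notify_id=$((notify_id + notification_count))
    }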
common/autotest_common.sh@10 -- # set +x 00:22:23.511 request: 00:22:23.511 { 00:22:23.511 "name": "nvme", 00:22:23.511 "trtype": "tcp", 00:22:23.511 "traddr": "10.0.0.2", 00:22:23.511 "adrfam": "ipv4", 00:22:23.511 "trsvcid": "8009", 00:22:23.511 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:23.511 "wait_for_attach": true, 00:22:23.511 "method": "bdev_nvme_start_discovery", 00:22:23.511 "req_id": 1 00:22:23.511 } 00:22:23.511 Got JSON-RPC error response 00:22:23.511 response: 00:22:23.511 { 00:22:23.511 "code": -17, 00:22:23.511 "message": "File exists" 00:22:23.511 } 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:23.512 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
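The -17 / "File exists" response above is the point of the @143 check: a discovery service named nvme is already running on this host socket, so registering the same name again must fail. The NOT/valid_exec_arg frames wrap the RPC and invert its exit status; a reduced sketch of the pattern (the real helper, per the @648-@675 frames, also classifies exit codes above 128, i.e. signals, before deciding):

    # Assert that a command fails; success is the test error here.
    NOT() {
        if "$@"; then
            return 1    # unexpectedly succeeded
        fi
        return 0        # failed as expected
    }

    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
        -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w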
# local arg=rpc_cmd 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.771 request: 00:22:23.771 { 00:22:23.771 "name": "nvme_second", 00:22:23.771 "trtype": "tcp", 00:22:23.771 "traddr": "10.0.0.2", 00:22:23.771 "adrfam": "ipv4", 00:22:23.771 "trsvcid": "8009", 00:22:23.771 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:23.771 "wait_for_attach": true, 00:22:23.771 "method": "bdev_nvme_start_discovery", 00:22:23.771 "req_id": 1 00:22:23.771 } 00:22:23.771 Got JSON-RPC error response 00:22:23.771 response: 00:22:23.771 { 00:22:23.771 "code": -17, 00:22:23.771 "message": "File exists" 00:22:23.771 } 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:23.771 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.772 10:35:18 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:23.772 10:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:23.772 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:23.772 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:23.772 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:23.772 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:23.772 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:23.772 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:23.772 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:23.772 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.772 10:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.711 [2024-07-15 10:35:19.298589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.711 [2024-07-15 10:35:19.298664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2090540 with addr=10.0.0.2, port=8010 00:22:24.711 [2024-07-15 10:35:19.298698] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:24.711 [2024-07-15 10:35:19.298715] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:24.711 [2024-07-15 10:35:19.298730] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:25.653 [2024-07-15 10:35:20.300939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.653 [2024-07-15 10:35:20.301001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2090540 with addr=10.0.0.2, port=8010 00:22:25.653 [2024-07-15 10:35:20.301034] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:25.653 [2024-07-15 10:35:20.301050] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:25.653 [2024-07-15 10:35:20.301064] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:27.032 [2024-07-15 10:35:21.303054] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:27.032 request: 00:22:27.032 { 00:22:27.032 "name": "nvme_second", 00:22:27.032 "trtype": "tcp", 00:22:27.032 "traddr": "10.0.0.2", 00:22:27.032 "adrfam": "ipv4", 00:22:27.032 "trsvcid": "8010", 00:22:27.032 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:27.032 "wait_for_attach": false, 00:22:27.032 "attach_timeout_ms": 3000, 00:22:27.032 "method": "bdev_nvme_start_discovery", 00:22:27.032 "req_id": 1 00:22:27.032 } 00:22:27.032 Got JSON-RPC error response 00:22:27.032 response: 00:22:27.032 { 00:22:27.032 "code": -110, 
00:22:27.032 "message": "Connection timed out" 00:22:27.032 } 00:22:27.032 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:27.032 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:27.032 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:27.032 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:27.032 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:27.032 10:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:27.032 10:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:27.032 10:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:27.032 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2380507 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:27.033 rmmod nvme_tcp 00:22:27.033 rmmod nvme_fabrics 00:22:27.033 rmmod nvme_keyring 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2380485 ']' 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2380485 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2380485 ']' 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2380485 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2380485 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2380485' 00:22:27.033 killing process with pid 2380485 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2380485 00:22:27.033 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2380485 00:22:27.293 10:35:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:27.293 10:35:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:27.293 10:35:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:27.293 10:35:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:27.293 10:35:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:27.293 10:35:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.293 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:27.293 10:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.208 10:35:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:29.208 00:22:29.208 real 0m14.098s 00:22:29.208 user 0m21.066s 00:22:29.208 sys 0m2.911s 00:22:29.208 10:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:29.208 10:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.208 ************************************ 00:22:29.208 END TEST nvmf_host_discovery 00:22:29.208 ************************************ 00:22:29.208 10:35:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:29.208 10:35:23 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:29.208 10:35:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:29.208 10:35:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:29.208 10:35:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:29.208 ************************************ 00:22:29.208 START TEST nvmf_host_multipath_status 00:22:29.208 ************************************ 00:22:29.208 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:29.466 * Looking for test storage... 
00:22:29.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:29.466 10:35:23 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:29.466 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:29.467 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:22:29.467 10:35:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:31.368 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:31.368 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:31.368 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:31.368 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:31.368 10:35:25 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:31.368 10:35:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.626 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.626 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.626 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:31.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:22:31.626 00:22:31.626 --- 10.0.0.2 ping statistics --- 00:22:31.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.626 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:22:31.626 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:31.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:22:31.626 00:22:31.627 --- 10.0.0.1 ping statistics --- 00:22:31.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.627 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2383668 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2383668 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2383668 ']' 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:31.627 10:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:31.627 [2024-07-15 10:35:26.130742] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
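The nvmf_tcp_init sequence above turns the two e810 ports into a self-contained point-to-point lab: the target-side port is moved into its own network namespace, each side gets a 10.0.0.x/24 address, and a ping in each direction proves the path before any NVMe traffic flows. Condensed from the trace (interface names cvl_0_0/cvl_0_1 as reported there):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator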
00:22:31.627 [2024-07-15 10:35:26.130837] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.627 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.627 [2024-07-15 10:35:26.199468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:31.885 [2024-07-15 10:35:26.314954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.885 [2024-07-15 10:35:26.315020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.885 [2024-07-15 10:35:26.315045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.885 [2024-07-15 10:35:26.315058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.885 [2024-07-15 10:35:26.315070] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:31.885 [2024-07-15 10:35:26.315141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.885 [2024-07-15 10:35:26.315148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.842 10:35:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:32.842 10:35:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:32.842 10:35:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:32.842 10:35:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:32.842 10:35:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:32.842 10:35:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.842 10:35:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2383668 00:22:32.842 10:35:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:32.842 [2024-07-15 10:35:27.409513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.843 10:35:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:33.103 Malloc0 00:22:33.363 10:35:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:33.620 10:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:33.620 10:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:33.878 [2024-07-15 10:35:28.490291] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.878 10:35:28 
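nvmfappstart launches the target inside that namespace and waits for its RPC socket; the setup then builds one subsystem with a RAM-backed namespace and listeners on both test ports. A sketch of the sequence as traced (paths abbreviated to rpc.py; the backgrounding/waitforlisten glue is assumed, the trace only shows the resulting pid 2383668):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    waitforlisten $nvmfpid                     # poll /var/tmp/spdk.sock until up
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The two listeners (4420 and 4421) are the two paths the multipath test will flip between.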
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:34.136 [2024-07-15 10:35:28.734987] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:34.136 10:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2383968 00:22:34.136 10:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:34.136 10:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:34.136 10:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2383968 /var/tmp/bdevperf.sock 00:22:34.136 10:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2383968 ']' 00:22:34.136 10:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:34.136 10:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.136 10:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:34.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:34.136 10:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.136 10:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:34.700 10:35:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.700 10:35:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:34.700 10:35:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:34.700 10:35:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:35.267 Nvme0n1 00:22:35.267 10:35:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:35.525 Nvme0n1 00:22:35.525 10:35:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:35.525 10:35:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:38.055 10:35:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:38.055 10:35:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
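bdevperf is the host side here (started with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90), and the multipath device is assembled with two attach calls against the same subsystem, exactly as traced:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1     # @52 frame; assumed: retry I/O indefinitely
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

The first call creates Nvme0n1 over port 4420; the second, thanks to -x multipath, adds the 4421 connection as a second path to the same bdev instead of failing as a duplicate. With I/O kept running via bdevperf.py perform_tests, the ANA checks that follow can observe path state changes live.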
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:38.055 10:35:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:38.055 10:35:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:38.989 10:35:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:38.989 10:35:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:38.990 10:35:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.990 10:35:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:39.246 10:35:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.246 10:35:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:39.246 10:35:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.246 10:35:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:39.503 10:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:39.503 10:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:39.503 10:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.503 10:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:39.760 10:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.760 10:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:39.760 10:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.760 10:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:40.017 10:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.017 10:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:40.017 10:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.017 10:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
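set_ANA_state drives the target side: one ANA state for the 4420 listener, one for the 4421 listener, after which the host is polled until its view agrees. Reconstructed from the @59/@60 frames:

    # set_ANA_state <state for 4420> <state for 4421>, e.g.
    # set_ANA_state non_optimized optimized
    set_ANA_state() {
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

The combinations exercised in this run are optimized/optimized, non_optimized/optimized, non_optimized/non_optimized, and non_optimized/inaccessible.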
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:40.275 10:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.275 10:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:40.275 10:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.275 10:35:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:40.532 10:35:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.532 10:35:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:40.532 10:35:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:40.790 10:35:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:41.356 10:35:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:42.294 10:35:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:42.294 10:35:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:42.294 10:35:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.294 10:35:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:42.553 10:35:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:42.553 10:35:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:42.553 10:35:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.553 10:35:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:42.810 10:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:42.810 10:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:42.810 10:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.810 10:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:43.069 10:35:37 nvmf_tcp.nvmf_host_multipath_status -- 
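Each check_status round is six port_status probes: the current, connected, and accessible flags for both ports, read from the host through bdevperf's RPC socket. The @64 frames show the whole mechanism; as a function it is roughly:

    # port_status <trsvcid> <field> <expected>, where field is one of
    # current / connected / accessible.
    port_status() {
        local port=$1 field=$2 expected=$3
        local got
        got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
        [[ $got == "$expected" ]]
    }

    port_status 4421 current false    # e.g. the @69 probe above

With both listeners optimized, only one path may be current at a time (4420 true, 4421 false here), while both are expected to stay connected and accessible.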
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.069 10:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:43.069 10:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.069 10:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:43.327 10:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.327 10:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:43.327 10:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.327 10:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:43.586 10:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.586 10:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:43.586 10:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.586 10:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:43.845 10:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.845 10:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:43.845 10:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:43.845 10:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:44.103 10:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:45.479 10:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:45.479 10:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:45.479 10:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.479 10:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:45.479 10:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.479 10:35:39 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:45.479 10:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.479 10:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:45.738 10:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:45.738 10:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:45.738 10:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.738 10:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:45.996 10:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.996 10:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:45.996 10:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.996 10:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:46.254 10:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.254 10:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:46.254 10:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.254 10:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:46.511 10:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.511 10:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:46.511 10:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.511 10:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:46.769 10:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.769 10:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:46.769 10:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:47.027 10:35:41 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:47.314 10:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:48.251 10:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:48.252 10:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:48.252 10:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.252 10:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:48.510 10:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.510 10:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:48.510 10:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.510 10:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:48.767 10:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:48.767 10:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:48.767 10:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.767 10:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:49.024 10:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.024 10:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:49.024 10:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.024 10:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:49.281 10:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.281 10:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:49.281 10:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.281 10:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:49.537 10:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:22:49.537 10:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:49.537 10:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.537 10:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:49.795 10:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:49.795 10:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:49.795 10:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:50.052 10:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:50.310 10:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:51.248 10:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:51.248 10:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:51.248 10:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.248 10:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:51.505 10:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:51.506 10:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:51.506 10:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.506 10:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:51.763 10:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:51.763 10:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:51.763 10:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.763 10:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:52.021 10:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.021 10:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:22:52.021 10:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.021 10:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:52.279 10:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.279 10:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:52.279 10:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.279 10:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:52.537 10:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:52.537 10:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:52.537 10:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.537 10:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:52.795 10:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:52.795 10:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:52.795 10:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:53.053 10:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:53.309 10:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:54.247 10:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:54.247 10:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:54.247 10:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.247 10:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:54.506 10:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:54.506 10:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:54.506 10:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.506 10:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:54.764 10:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.764 10:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:54.764 10:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.764 10:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:55.022 10:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.022 10:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:55.022 10:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.022 10:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:55.280 10:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.280 10:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:55.280 10:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.280 10:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:55.538 10:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:55.538 10:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:55.538 10:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.538 10:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:55.795 10:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.795 10:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:56.052 10:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:56.052 10:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:22:56.310 10:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:56.568 10:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:57.502 10:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:57.502 10:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:57.502 10:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.502 10:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:57.759 10:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.759 10:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:57.759 10:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.759 10:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:58.028 10:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.028 10:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:58.028 10:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.028 10:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:58.285 10:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.285 10:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:58.285 10:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.285 10:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:58.543 10:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.543 10:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:58.543 10:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.543 10:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:58.800 10:35:53 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.800 10:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:58.800 10:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.800 10:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:59.057 10:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.057 10:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:59.057 10:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:59.315 10:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:59.572 10:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:00.506 10:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:00.506 10:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:00.506 10:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.506 10:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:00.770 10:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:00.770 10:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:00.770 10:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.770 10:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:01.028 10:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.028 10:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:01.028 10:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.028 10:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:01.286 10:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.286 10:35:55 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:01.286 10:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.286 10:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:01.543 10:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.543 10:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:01.543 10:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.543 10:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:01.801 10:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.801 10:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:01.801 10:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.801 10:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:02.058 10:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.058 10:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:02.058 10:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:02.315 10:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:02.574 10:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:03.954 10:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:03.954 10:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:03.954 10:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.954 10:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:03.954 10:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.954 10:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:03.954 10:35:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.954 10:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:04.212 10:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.212 10:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:04.212 10:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.212 10:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:04.469 10:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.469 10:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:04.469 10:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.469 10:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:04.726 10:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.726 10:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:04.726 10:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.726 10:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:04.984 10:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.984 10:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:04.984 10:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.984 10:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:05.240 10:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.240 10:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:05.240 10:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:05.498 10:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:05.757 10:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:06.691 10:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:06.691 10:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:06.691 10:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.691 10:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:06.948 10:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.948 10:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:06.948 10:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.948 10:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:07.206 10:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:07.206 10:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:07.206 10:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.206 10:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:07.463 10:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.463 10:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:07.463 10:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.463 10:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:07.720 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.720 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:07.720 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.720 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:07.977 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.977 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:23:07.977 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:07.977 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:08.236 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:08.236 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2383968
00:23:08.236 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2383968 ']'
00:23:08.236 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2383968
00:23:08.236 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:23:08.236 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:08.236 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2383968
00:23:08.236 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:23:08.236 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:23:08.236 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2383968'
killing process with pid 2383968
00:23:08.236 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2383968
00:23:08.236 10:36:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2383968
00:23:08.503 Connection closed with partial response:
00:23:08.503
00:23:08.503
00:23:08.503 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2383968
00:23:08.503 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-07-15 10:35:28.793734] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-15 10:35:28.793822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383968 ]
00:23:08.503 EAL: No free 2048 kB hugepages reported on node 1
00:23:08.503 [2024-07-15 10:35:28.857358] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:08.503 [2024-07-15 10:35:28.967932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:23:08.503 Running I/O for 90 seconds...
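Before reading the captured bdevperf log that follows, it helps to have the shape of the helpers the xtrace above was exercising. The sketch below is reconstructed from the trace itself (the multipath_status.sh@59-60, @64 and @68-73 line tags), not copied from the verbatim script; the rpc_py and bdevperf_rpc_sock variable names are assumptions chosen to match the paths used throughout this run. set_ANA_state flips the ANA state of the two listeners, port_status compares one jq-extracted field of the bdev_nvme_get_io_paths output against an expected value, and check_status asserts all six current/connected/accessible flags in one call.

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path used by every RPC call in the trace
bdevperf_rpc_sock=/var/tmp/bdevperf.sock                                  # bdevperf's RPC socket, per the -s flag above

set_ANA_state() {
        # $1/$2: ANA state to apply to the 4420/4421 listeners (multipath_status.sh@59-60)
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n $1
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n $2
}

port_status() {
        # $1: trsvcid, $2: io_path field (current|connected|accessible), $3: expected value (@64)
        local status
        status=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
                jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
}

check_status() {
        # asserts the six current/connected/accessible flags for ports 4420 and 4421 (@68-73)
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
                port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
                port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}

Read against that sketch, the state transitions above cover both multipath policies: under the default active_passive policy exactly one path reports current==true (none when both listeners are inaccessible), tracking whichever listener holds the preferred ANA state, while after the @116 switch to active_active (bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active) every path in the best available ANA group is current at once, which is what the @121 check_status true true true true true true and its @131 repeat for two non_optimized listeners assert.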
00:23:08.503 [2024-07-15 10:35:44.531629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.503 [2024-07-15 10:35:44.531702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.531808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.503 [2024-07-15 10:35:44.531830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.531869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.503 [2024-07-15 10:35:44.531895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.531934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.503 [2024-07-15 10:35:44.531952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.531975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.503 [2024-07-15 10:35:44.531992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.532015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.503 [2024-07-15 10:35:44.532032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.532054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.503 [2024-07-15 10:35:44.532070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.532093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.503 [2024-07-15 10:35:44.532110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.532473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.503 [2024-07-15 10:35:44.532496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.532525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.503 [2024-07-15 10:35:44.532543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.532567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.503 [2024-07-15 10:35:44.532597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.532622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.503 [2024-07-15 10:35:44.532638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.532661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.503 [2024-07-15 10:35:44.532677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.532700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.503 [2024-07-15 10:35:44.532716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.532738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.503 [2024-07-15 10:35:44.532754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.532792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.503 [2024-07-15 10:35:44.532808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.532830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.503 [2024-07-15 10:35:44.532845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.532890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.503 [2024-07-15 10:35:44.532907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.532945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.503 [2024-07-15 10:35:44.532962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.532985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.503 [2024-07-15 10:35:44.533001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.533024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.503 [2024-07-15 10:35:44.533040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.533063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.503 [2024-07-15 10:35:44.533079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.533101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:57120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.503 [2024-07-15 10:35:44.533118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.533145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.503 [2024-07-15 10:35:44.533162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.533185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.503 [2024-07-15 10:35:44.533202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.533315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.503 [2024-07-15 10:35:44.533336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.533378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.503 [2024-07-15 10:35:44.533395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.533418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.503 [2024-07-15 10:35:44.533434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.533457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.503 [2024-07-15 10:35:44.533472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:08.503 [2024-07-15 10:35:44.533495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:57176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
[Repeated nvme_qpair.c NOTICE output elided. Between 2024-07-15 10:35:44 and 10:36:00 (console timestamps 00:23:08.503-00:23:08.506), the log records hundreds of interleaved nvme_io_qpair_print_command / spdk_nvme_print_completion pairs for READ and WRITE commands on qid:1 (nsid:1, len:8, varying cid; lba 57184-58080 in the first burst, 38608-39624 in the second). Every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) with cdw0:0, p:0 m:0 dnr:0 and a cycling sqhd value; the final record is truncated mid-entry.]
dnr:0 00:23:08.506 [2024-07-15 10:36:00.189786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.506 [2024-07-15 10:36:00.189803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:08.506 [2024-07-15 10:36:00.189825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.506 [2024-07-15 10:36:00.189842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:08.506 [2024-07-15 10:36:00.189865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.506 [2024-07-15 10:36:00.189895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:08.506 [2024-07-15 10:36:00.189921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.506 [2024-07-15 10:36:00.189939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:08.506 [2024-07-15 10:36:00.189962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.506 [2024-07-15 10:36:00.189979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:08.506 [2024-07-15 10:36:00.190002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.190019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.190043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.190060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.190091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.190109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.191980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.192007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.192054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.192095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.192135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.192181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.192221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.192277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.192316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.192355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.192408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.192446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.192490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.192527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.192564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.192602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.192639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.192677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.192715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.192752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.192789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.192826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:08.507 [2024-07-15 10:36:00.192863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.192938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.192961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.192981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.193005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.193022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.193044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.193061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.193084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.193100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.193122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.193138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.193161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.193178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.193216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.193232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.193253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.193277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.193299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.193315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.194643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.194670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.194698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.194718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.194742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.194759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.194782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.194803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.194827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.194844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.194866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.194892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.194928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.194945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.194968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.194985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.195008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.195024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.195046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.195063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.195085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.195102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.195125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.195141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.195164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.195183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.195206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.195222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.195245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.195262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.195285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.195302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.195796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.195820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.195848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.195867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.195899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.195930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:23:08.507 [2024-07-15 10:36:00.195952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.195970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.195993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.196009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.196032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.196048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.196071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.196088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.196110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.196126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.196164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.196181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.196204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.196236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.196258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.196274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.196295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.196325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.196354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.196371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.196410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.196427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.196450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.196467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.196490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.196506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.196529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.196546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.196569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.196585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.196607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.196624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.196647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.507 [2024-07-15 10:36:00.196663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:08.507 [2024-07-15 10:36:00.196702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.507 [2024-07-15 10:36:00.196718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.196755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.196772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.196798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.196814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.196837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.196854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.196884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.196907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.196931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.196948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.196971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.196987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.197010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.197027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.197050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.197067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.198456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.198482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.198510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.198528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.198551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.198568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.198607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:08.508 [2024-07-15 10:36:00.198624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.198646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.198663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.198700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.198716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.198738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.198767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.198793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.198814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.198839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.198856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.198888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.198908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.198932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.198949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.198972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.198989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.199012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.199029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.199052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.199068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.199091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.199108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.199130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.199147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.199170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.199187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.199210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.199227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.199249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.199266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.199288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.199306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.199333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.199351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.199373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.199390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.199413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.199445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.199468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.199499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.199522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.199538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.199559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.199576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.199597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.199614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.199635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.199651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.199673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.199689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.202286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.202313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.202342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.202360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.202383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.202400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.202428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.202447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:23:08.508 [2024-07-15 10:36:00.202469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.202486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.202509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.202526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.202548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.202565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.202589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.202606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.202629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.202647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.202670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.202687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.202710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.202727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.202749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.508 [2024-07-15 10:36:00.202767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.202789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.202806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:08.508 [2024-07-15 10:36:00.202829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.508 [2024-07-15 10:36:00.202846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
00:23:08.508 [2024-07-15 10:36:00.202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ/WRITE sqid:1 nsid:1 len:8 (lba range ~38632-40848), one command printed per outstanding I/O on the queue
00:23:08.508 [2024-07-15 10:36:00.202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, one completion printed per command above (sqhd advancing 002d through 007f, wrapping to 0000 and continuing to 007b)
00:23:08.511 [2024-07-15 10:36:00.222] nvme_qpair.c: identical *NOTICE* command/completion pairs repeat for the remainder of the run
cid:88 nsid:1 lba:40264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.511 [2024-07-15 10:36:00.222079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:08.511 [2024-07-15 10:36:00.222102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.511 [2024-07-15 10:36:00.222119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:08.511 [2024-07-15 10:36:00.222141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.511 [2024-07-15 10:36:00.222176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:08.511 [2024-07-15 10:36:00.222203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.511 [2024-07-15 10:36:00.222246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:08.511 [2024-07-15 10:36:00.222269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:40520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.511 [2024-07-15 10:36:00.222300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.511 [2024-07-15 10:36:00.222321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.511 [2024-07-15 10:36:00.222336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.511 [2024-07-15 10:36:00.222357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.511 [2024-07-15 10:36:00.222372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:08.511 [2024-07-15 10:36:00.222393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.511 [2024-07-15 10:36:00.222407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:08.511 [2024-07-15 10:36:00.222428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:40488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.511 [2024-07-15 10:36:00.222444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:08.511 [2024-07-15 10:36:00.222464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.511 [2024-07-15 10:36:00.222479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:08.511 [2024-07-15 10:36:00.222500] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:40592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.511 [2024-07-15 10:36:00.222515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:08.511 [2024-07-15 10:36:00.222536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.511 [2024-07-15 10:36:00.222551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:08.511 [2024-07-15 10:36:00.222571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.511 [2024-07-15 10:36:00.222586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:08.511 [2024-07-15 10:36:00.222607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.511 [2024-07-15 10:36:00.222623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:08.511 [2024-07-15 10:36:00.222643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:40760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.511 [2024-07-15 10:36:00.222658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:08.511 [2024-07-15 10:36:00.222678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.222697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.222718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.222734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.222755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.222770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.222792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.222808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.222829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.222843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.222889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.222908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.222931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.222947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.224862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.224896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.224929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.224947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.224971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.224987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.225010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.225026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.225049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:40952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.225066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.225088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.225110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.225133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.225150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.225188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.225205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.225241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.225257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.225278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.225294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.225315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.225330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.225350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.225366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.225387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.225402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.225439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.225455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.225477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.225507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.225530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.225562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.225586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.225603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.225625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.225642] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.225671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.225689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.225712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.225729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.226840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.226867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.226906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.226931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.226954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.226986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.227025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.227079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.227119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:40400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.227158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.227222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.227278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.227318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.227363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.227403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.227442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.227481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.227520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.227559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.227615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:20 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.227653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.227690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.227743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.227793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.227829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.227891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.227935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.227974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.227996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.228012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.228033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.228049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.228071] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.228087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.228109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.228125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.228146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.228163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.228200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.228216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.228236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.228251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.228272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.228287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.228308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.228323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.228343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:40376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.228362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.228384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.228399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.229845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.512 [2024-07-15 10:36:00.229870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a 
p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.229907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.229926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.229949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.229966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.229989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.230006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.230028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.230045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.230068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.230085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.230107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.230124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.230147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.230163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:08.512 [2024-07-15 10:36:00.230561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.512 [2024-07-15 10:36:00.230586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.230615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.230633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.230657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.230678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.230703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.230720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.230743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.230759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.230782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.230799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.230822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.230839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.230861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.230887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.230913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.230934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.230957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.230974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.230997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.231013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.231036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.231054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.231076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.231093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.231116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.231132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.231155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.231177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.231204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.231221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.231243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.231260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.231282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.231299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.231322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.231338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.231361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.231378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.233141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.233188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:08.513 [2024-07-15 10:36:00.233229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.233285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.233325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.233377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.233413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.233454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.233490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.233526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.233561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.233597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 
nsid:1 lba:40520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.233633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.233669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.233705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.233741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.233777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.233812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.233848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.233915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.233971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.233993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.234009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.234031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.234047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.234069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.234085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.234107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.234123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.234145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.234176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.234197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.234213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.234250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.234265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.234286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.234301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.234322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.234337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.235669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.235695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.235738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.235765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:23:08.513 [2024-07-15 10:36:00.235790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.235809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.235831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.235849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.235871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.235898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.235923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.235940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.235963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.235980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.236002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.236019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.236042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.236059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.237485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.237525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.237552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.237569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.237596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.237612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:66 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.237633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.513 [2024-07-15 10:36:00.237649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.237670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.237685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:08.513 [2024-07-15 10:36:00.237712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.513 [2024-07-15 10:36:00.237729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:08.514 [2024-07-15 10:36:00.237750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.514 [2024-07-15 10:36:00.237766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:08.514 [2024-07-15 10:36:00.237787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.514 [2024-07-15 10:36:00.237817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:08.514 [2024-07-15 10:36:00.237838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.514 [2024-07-15 10:36:00.237854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:08.514 [2024-07-15 10:36:00.237881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.514 [2024-07-15 10:36:00.237923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:08.514 Received shutdown signal, test time was about 32.516395 seconds 00:23:08.514 00:23:08.514 Latency(us) 00:23:08.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.514 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:08.514 Verification LBA range: start 0x0 length 0x4000 00:23:08.514 Nvme0n1 : 32.52 7876.11 30.77 0.00 0.00 16225.36 964.84 4026531.84 00:23:08.514 =================================================================================================================== 00:23:08.514 Total : 7876.11 30.77 0.00 0.00 16225.36 964.84 4026531.84 00:23:08.514 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:08.771 
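The (03/02) completions above are NVMe path-related status "Asymmetric Access Inaccessible": multipath_status flips one path's ANA state mid-run, so I/O queued on that path is failed back to the bdev layer and retried on the surviving path, which is exactly what the verify job is exercising. Once fio reports its summary (the Latency table: ~7876 IOPS over the 32.5 s run), the script tears the target down over JSON-RPC. A minimal sketch of that teardown call, with the script path and NQN copied from the trace and SPDK's default RPC socket assumed:

  # Delete the target-side subsystem created for this test; its
  # listeners and namespace are removed along with it.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1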
10:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:08.771 rmmod nvme_tcp 00:23:08.771 rmmod nvme_fabrics 00:23:08.771 rmmod nvme_keyring 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2383668 ']' 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2383668 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2383668 ']' 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2383668 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2383668 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2383668' 00:23:08.771 killing process with pid 2383668 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2383668 00:23:08.771 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2383668 00:23:09.337 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:09.337 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:09.337 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:09.337 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:09.337 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:09.337 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.337 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.337 10:36:03 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.236 10:36:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:11.236 00:23:11.236 real 0m41.936s 00:23:11.236 user 2m6.102s 00:23:11.236 sys 0m10.437s 00:23:11.236 10:36:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:11.236 10:36:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:11.236 ************************************ 00:23:11.236 END TEST nvmf_host_multipath_status 00:23:11.236 ************************************ 00:23:11.236 10:36:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:11.236 10:36:05 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:11.236 10:36:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:11.236 10:36:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:11.236 10:36:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:11.236 ************************************ 00:23:11.236 START TEST nvmf_discovery_remove_ifc 00:23:11.236 ************************************ 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:11.236 * Looking for test storage... 00:23:11.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.236 10:36:05 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:11.236 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:23:11.494 10:36:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:13.423 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.423 10:36:07 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:13.423 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:13.423 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:13.423 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:13.423 10:36:07 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.423 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:13.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:23:13.424 00:23:13.424 --- 10.0.0.2 ping statistics --- 00:23:13.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.424 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:13.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:23:13.424 00:23:13.424 --- 10.0.0.1 ping statistics --- 00:23:13.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.424 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2390564 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2390564 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2390564 ']' 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.424 10:36:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:13.424 [2024-07-15 10:36:08.033925] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:13.424 [2024-07-15 10:36:08.034006] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.424 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.681 [2024-07-15 10:36:08.104131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.681 [2024-07-15 10:36:08.224921] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.681 [2024-07-15 10:36:08.224969] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.682 [2024-07-15 10:36:08.224993] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.682 [2024-07-15 10:36:08.225005] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.682 [2024-07-15 10:36:08.225015] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.682 [2024-07-15 10:36:08.225042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.644 10:36:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.644 10:36:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:23:14.644 10:36:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.644 10:36:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:14.644 10:36:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:14.644 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.644 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:14.644 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.644 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:14.644 [2024-07-15 10:36:09.012698] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.644 [2024-07-15 10:36:09.020852] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:14.644 null0 00:23:14.644 [2024-07-15 10:36:09.052825] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.644 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.644 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2390923 00:23:14.644 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:14.644 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2390923 /tmp/host.sock 00:23:14.644 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2390923 ']' 00:23:14.644 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:14.644 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:23:14.644 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:14.644 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:14.644 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.644 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:14.644 [2024-07-15 10:36:09.118312] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:14.644 [2024-07-15 10:36:09.118405] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2390923 ] 00:23:14.644 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.644 [2024-07-15 10:36:09.177117] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.644 [2024-07-15 10:36:09.285300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.903 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.903 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:23:14.903 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:14.903 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:14.903 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.903 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:14.903 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.903 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:14.903 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.903 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:14.903 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.903 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:14.903 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.903 10:36:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:15.839 [2024-07-15 10:36:10.460615] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:15.839 [2024-07-15 10:36:10.460662] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:15.839 [2024-07-15 10:36:10.460691] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:16.098 [2024-07-15 10:36:10.589097] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:16.356 [2024-07-15 10:36:10.775213] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:16.356 [2024-07-15 10:36:10.775285] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:16.356 [2024-07-15 10:36:10.775331] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:16.356 [2024-07-15 10:36:10.775365] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:16.356 [2024-07-15 10:36:10.775404] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.356 [2024-07-15 10:36:10.779245] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb3a870 was disconnected and freed. delete nvme_qpair. 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:16.356 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.357 10:36:10 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:16.357 10:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:17.293 10:36:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:17.293 10:36:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.293 10:36:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:17.293 10:36:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.293 10:36:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:17.293 10:36:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:17.293 10:36:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:17.293 10:36:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.551 10:36:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:17.551 10:36:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:18.487 10:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:18.487 10:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:18.487 10:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:18.487 10:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.487 10:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:18.487 10:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:18.487 10:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:18.487 10:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.487 10:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:18.487 10:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:19.422 10:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:19.422 10:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.422 10:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:19.422 10:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.422 10:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:19.422 10:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:19.422 10:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:19.422 10:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.422 10:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:19.422 10:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:20.799 10:36:15 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:20.799 10:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.799 10:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:20.799 10:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.799 10:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:20.799 10:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:20.799 10:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:20.799 10:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.799 10:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:20.799 10:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:21.737 10:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:21.737 10:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.738 10:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.738 10:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:21.738 10:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:21.738 10:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:21.738 10:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:21.738 10:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.738 10:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:21.738 10:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:21.738 [2024-07-15 10:36:16.216211] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:21.738 [2024-07-15 10:36:16.216287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.738 [2024-07-15 10:36:16.216323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 10:36:16.216347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.738 [2024-07-15 10:36:16.216362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 10:36:16.216377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.738 [2024-07-15 10:36:16.216392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 10:36:16.216409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.738 [2024-07-15 10:36:16.216424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 10:36:16.216439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.738 [2024-07-15 10:36:16.216455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.738 [2024-07-15 10:36:16.216470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01300 is same with the state(5) to be set 00:23:21.738 [2024-07-15 10:36:16.226226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb01300 (9): Bad file descriptor 00:23:21.738 [2024-07-15 10:36:16.236276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:22.674 10:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:22.674 10:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.674 10:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.674 10:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:22.674 10:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:22.674 10:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:22.674 10:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:22.674 [2024-07-15 10:36:17.298923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:22.674 [2024-07-15 10:36:17.298978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb01300 with addr=10.0.0.2, port=4420 00:23:22.674 [2024-07-15 10:36:17.299006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01300 is same with the state(5) to be set 00:23:22.674 [2024-07-15 10:36:17.299049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb01300 (9): Bad file descriptor 00:23:22.674 [2024-07-15 10:36:17.299537] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:22.674 [2024-07-15 10:36:17.299573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:22.674 [2024-07-15 10:36:17.299593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:22.674 [2024-07-15 10:36:17.299612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:22.674 [2024-07-15 10:36:17.299641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
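The connect() errno 110 (ETIMEDOUT) and the resetting-controller / "Resetting controller failed." churn above are the intended effect of downing cvl_0_0: the host keeps re-dialing 10.0.0.2:4420 at the cadence configured when discovery was started. Those knobs were passed on the bdev_nvme_start_discovery line traced earlier; repeated here for reference (rpc_cmd is the test suite's thin wrapper around rpc.py):

  # Reconnect policy under test: retry every 1 s, fail pending I/O
  # after 1 s, and give the controller up for lost after 2 s pathless.
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

With a 2 s loss timeout the controller is expected to reach the failed state quickly, which is what the next entries show.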
00:23:22.674 [2024-07-15 10:36:17.299662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:22.674 10:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.674 10:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:22.674 10:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:24.046 [2024-07-15 10:36:18.302173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:24.046 [2024-07-15 10:36:18.302200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:24.047 [2024-07-15 10:36:18.302213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:24.047 [2024-07-15 10:36:18.302242] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:23:24.047 [2024-07-15 10:36:18.302265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.047 [2024-07-15 10:36:18.302311] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:24.047 [2024-07-15 10:36:18.302351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.047 [2024-07-15 10:36:18.302375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.047 [2024-07-15 10:36:18.302396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.047 [2024-07-15 10:36:18.302411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.047 [2024-07-15 10:36:18.302429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.047 [2024-07-15 10:36:18.302444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.047 [2024-07-15 10:36:18.302460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.047 [2024-07-15 10:36:18.302475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.047 [2024-07-15 10:36:18.302492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.047 [2024-07-15 10:36:18.302507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.047 [2024-07-15 10:36:18.302522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
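With the old controller written off, the script restores the interface (the ip addr add / ip link set ... up steps traced just below) and waits for rediscovery to surface the namespace as nvme1n1. The wait_for_bdev / get_bdev_list pattern the trace keeps repeating boils down to polling the host's RPC socket; loosely reconstructed from the xtrace (not the verbatim helper source, and the real helper likely also enforces a timeout):

  # Poll bdev_get_bdevs until the bdev list equals $1 -- '' while the
  # path is gone, nvme1n1 once the discovery service re-attaches it.
  wait_for_bdev() {
      local expected=$1 current
      while :; do
          current=$(rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
                    | jq -r '.[].name' | sort | xargs)
          [[ "$current" == "$expected" ]] && return 0
          sleep 1
      done
  }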
00:23:24.047 [2024-07-15 10:36:18.302651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb00780 (9): Bad file descriptor 00:23:24.047 [2024-07-15 10:36:18.303677] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:24.047 [2024-07-15 10:36:18.303702] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:24.047 10:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:24.983 10:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:24.983 10:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.983 10:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:24.983 10:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.983 10:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:23:24.983 10:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:24.983 10:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:24.983 10:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.983 10:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:24.983 10:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:25.928 [2024-07-15 10:36:20.357792] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:25.928 [2024-07-15 10:36:20.357831] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:25.928 [2024-07-15 10:36:20.357860] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:25.928 [2024-07-15 10:36:20.485288] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:25.928 10:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:25.928 10:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.928 10:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:25.928 10:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.928 10:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:25.928 10:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:25.928 10:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:25.928 10:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.928 10:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:25.928 10:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:26.185 [2024-07-15 10:36:20.588446] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:26.185 [2024-07-15 10:36:20.588505] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:26.185 [2024-07-15 10:36:20.588555] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:26.185 [2024-07-15 10:36:20.588585] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:26.185 [2024-07-15 10:36:20.588601] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:26.185 [2024-07-15 10:36:20.595330] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb08110 was disconnected and freed. delete nvme_qpair. 
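With the subsystem re-attached as nvme1 in the discovery events above, the remaining trace just polls until nvme1n1 is listed. The restore step itself is the pair of ip netns exec commands earlier in the trace (script lines 82-83); a condensed restore-and-wait sketch using this run's interface and namespace names:

    # Put the target-side interface back inside the SPDK network namespace,
    # then wait for the discovery poller to surface the subsystem again.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1    # same polling helper sketched above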
00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2390923 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2390923 ']' 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2390923 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2390923 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2390923' 00:23:27.120 killing process with pid 2390923 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2390923 00:23:27.120 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2390923 00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:27.378 rmmod nvme_tcp 00:23:27.378 rmmod nvme_fabrics 00:23:27.378 rmmod nvme_keyring 00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
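The module teardown that just returned 0 tolerates transient unload failures: errexit is dropped, the unload is retried up to 20 times, then errexit is restored. A sketch of the pattern as reconstructed from the trace (this run succeeds on the first pass; the short back-off between retries is an assumption, not visible here):

    # Unload nvme-tcp, retrying while connections drain; the kernel can
    # briefly hold a module reference right after the host disconnects.
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 0.2    # assumed back-off between retries
    done
    modprobe -v -r nvme-fabrics
    set -e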
00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2390564 ']' 00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2390564 00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2390564 ']' 00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2390564 00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2390564 00:23:27.378 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:27.379 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:27.379 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2390564' 00:23:27.379 killing process with pid 2390564 00:23:27.379 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2390564 00:23:27.379 10:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2390564 00:23:27.637 10:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:27.637 10:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:27.637 10:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:27.637 10:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:27.637 10:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:27.637 10:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.637 10:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.637 10:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.218 10:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:30.218 00:23:30.218 real 0m18.494s 00:23:30.218 user 0m26.802s 00:23:30.218 sys 0m3.027s 00:23:30.218 10:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:30.218 10:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:30.218 ************************************ 00:23:30.218 END TEST nvmf_discovery_remove_ifc 00:23:30.218 ************************************ 00:23:30.218 10:36:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:30.218 10:36:24 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:30.218 10:36:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:30.218 10:36:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:30.218 10:36:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:30.218 ************************************ 00:23:30.218 START TEST nvmf_identify_kernel_target 00:23:30.218 ************************************ 
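Both teardowns above funnel through the same killprocess helper before the identify test body begins below. A sketch of its logic as reconstructed from the xtrace (the exact autotest_common.sh source may differ; the sudo check mirrors the 'reactor_1 = sudo' test shown in the trace):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1       # the '[' -z $pid ']' guard in the trace
        kill -0 "$pid" || return 0      # nothing to do if the process is gone
        if [[ $(uname) == Linux ]]; then
            # Refuse to signal a bare sudo wrapper; target the reactor itself.
            [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true             # reap it, ignoring its exit status
    }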
00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:30.218 * Looking for test storage... 00:23:30.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:30.218 10:36:24 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:30.218 10:36:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:32.152 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:32.152 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:32.152 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:32.152 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:32.152 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:32.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:23:32.153 00:23:32.153 --- 10.0.0.2 ping statistics --- 00:23:32.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.153 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:32.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:23:32.153 00:23:32.153 --- 10.0.0.1 ping statistics --- 00:23:32.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.153 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:32.153 10:36:26 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:32.153 10:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:33.090 Waiting for block devices as requested 00:23:33.090 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:33.090 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:33.347 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:33.347 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:33.347 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:33.606 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:33.606 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:33.606 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:33.606 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:33.866 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:33.866 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:33.866 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:33.866 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:34.124 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:34.124 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:34.125 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:34.383 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:34.383 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:34.383 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:34.383 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:34.383 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:34.383 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:34.383 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:34.383 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:34.383 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:34.383 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:34.383 No valid GPT data, bailing 00:23:34.383 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:34.383 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:34.383 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:34.383 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:34.383 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:34.383 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:34.384 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:34.384 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:34.384 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:34.384 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:34.384 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:34.384 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:34.384 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:34.384 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:34.384 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:34.384 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:34.384 10:36:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:34.384 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:34.643 00:23:34.643 Discovery Log Number of Records 2, Generation counter 2 00:23:34.643 =====Discovery Log Entry 0====== 00:23:34.643 trtype: tcp 00:23:34.643 adrfam: ipv4 00:23:34.643 subtype: current discovery subsystem 00:23:34.643 treq: not specified, sq flow control disable supported 00:23:34.643 portid: 1 00:23:34.643 trsvcid: 4420 00:23:34.643 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:34.643 traddr: 10.0.0.1 00:23:34.643 eflags: none 00:23:34.643 sectype: none 00:23:34.643 =====Discovery Log Entry 1====== 00:23:34.643 trtype: tcp 00:23:34.643 adrfam: ipv4 00:23:34.643 subtype: nvme subsystem 00:23:34.643 treq: not specified, sq flow control disable supported 00:23:34.643 portid: 1 00:23:34.643 trsvcid: 4420 00:23:34.643 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:34.643 traddr: 10.0.0.1 00:23:34.643 eflags: none 00:23:34.643 sectype: none 00:23:34.643 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:34.643 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:34.643 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.643 ===================================================== 00:23:34.643 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:34.643 ===================================================== 00:23:34.643 Controller Capabilities/Features 00:23:34.643 ================================ 00:23:34.643 Vendor ID: 0000 00:23:34.643 Subsystem Vendor ID: 0000 00:23:34.643 Serial Number: 24718ede6b5eb1e614b1 00:23:34.643 Model Number: Linux 00:23:34.643 Firmware Version: 6.7.0-68 00:23:34.643 Recommended Arb Burst: 0 00:23:34.643 IEEE OUI Identifier: 00 00 00 00:23:34.643 Multi-path I/O 00:23:34.643 May have multiple subsystem ports: No 00:23:34.643 May have multiple 
controllers: No 00:23:34.643 Associated with SR-IOV VF: No 00:23:34.643 Max Data Transfer Size: Unlimited 00:23:34.643 Max Number of Namespaces: 0 00:23:34.643 Max Number of I/O Queues: 1024 00:23:34.643 NVMe Specification Version (VS): 1.3 00:23:34.643 NVMe Specification Version (Identify): 1.3 00:23:34.643 Maximum Queue Entries: 1024 00:23:34.643 Contiguous Queues Required: No 00:23:34.643 Arbitration Mechanisms Supported 00:23:34.643 Weighted Round Robin: Not Supported 00:23:34.643 Vendor Specific: Not Supported 00:23:34.643 Reset Timeout: 7500 ms 00:23:34.643 Doorbell Stride: 4 bytes 00:23:34.643 NVM Subsystem Reset: Not Supported 00:23:34.643 Command Sets Supported 00:23:34.643 NVM Command Set: Supported 00:23:34.643 Boot Partition: Not Supported 00:23:34.643 Memory Page Size Minimum: 4096 bytes 00:23:34.643 Memory Page Size Maximum: 4096 bytes 00:23:34.643 Persistent Memory Region: Not Supported 00:23:34.643 Optional Asynchronous Events Supported 00:23:34.643 Namespace Attribute Notices: Not Supported 00:23:34.643 Firmware Activation Notices: Not Supported 00:23:34.643 ANA Change Notices: Not Supported 00:23:34.643 PLE Aggregate Log Change Notices: Not Supported 00:23:34.643 LBA Status Info Alert Notices: Not Supported 00:23:34.643 EGE Aggregate Log Change Notices: Not Supported 00:23:34.643 Normal NVM Subsystem Shutdown event: Not Supported 00:23:34.643 Zone Descriptor Change Notices: Not Supported 00:23:34.643 Discovery Log Change Notices: Supported 00:23:34.643 Controller Attributes 00:23:34.643 128-bit Host Identifier: Not Supported 00:23:34.643 Non-Operational Permissive Mode: Not Supported 00:23:34.643 NVM Sets: Not Supported 00:23:34.643 Read Recovery Levels: Not Supported 00:23:34.643 Endurance Groups: Not Supported 00:23:34.643 Predictable Latency Mode: Not Supported 00:23:34.643 Traffic Based Keep ALive: Not Supported 00:23:34.643 Namespace Granularity: Not Supported 00:23:34.643 SQ Associations: Not Supported 00:23:34.643 UUID List: Not Supported 00:23:34.643 Multi-Domain Subsystem: Not Supported 00:23:34.643 Fixed Capacity Management: Not Supported 00:23:34.643 Variable Capacity Management: Not Supported 00:23:34.643 Delete Endurance Group: Not Supported 00:23:34.643 Delete NVM Set: Not Supported 00:23:34.643 Extended LBA Formats Supported: Not Supported 00:23:34.643 Flexible Data Placement Supported: Not Supported 00:23:34.643 00:23:34.643 Controller Memory Buffer Support 00:23:34.643 ================================ 00:23:34.643 Supported: No 00:23:34.643 00:23:34.643 Persistent Memory Region Support 00:23:34.643 ================================ 00:23:34.643 Supported: No 00:23:34.643 00:23:34.643 Admin Command Set Attributes 00:23:34.643 ============================ 00:23:34.643 Security Send/Receive: Not Supported 00:23:34.643 Format NVM: Not Supported 00:23:34.643 Firmware Activate/Download: Not Supported 00:23:34.643 Namespace Management: Not Supported 00:23:34.643 Device Self-Test: Not Supported 00:23:34.643 Directives: Not Supported 00:23:34.643 NVMe-MI: Not Supported 00:23:34.643 Virtualization Management: Not Supported 00:23:34.643 Doorbell Buffer Config: Not Supported 00:23:34.643 Get LBA Status Capability: Not Supported 00:23:34.643 Command & Feature Lockdown Capability: Not Supported 00:23:34.643 Abort Command Limit: 1 00:23:34.643 Async Event Request Limit: 1 00:23:34.643 Number of Firmware Slots: N/A 00:23:34.643 Firmware Slot 1 Read-Only: N/A 00:23:34.643 Firmware Activation Without Reset: N/A 00:23:34.643 Multiple Update Detection Support: N/A 
00:23:34.643 Firmware Update Granularity: No Information Provided 00:23:34.643 Per-Namespace SMART Log: No 00:23:34.643 Asymmetric Namespace Access Log Page: Not Supported 00:23:34.643 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:34.643 Command Effects Log Page: Not Supported 00:23:34.643 Get Log Page Extended Data: Supported 00:23:34.643 Telemetry Log Pages: Not Supported 00:23:34.643 Persistent Event Log Pages: Not Supported 00:23:34.643 Supported Log Pages Log Page: May Support 00:23:34.643 Commands Supported & Effects Log Page: Not Supported 00:23:34.643 Feature Identifiers & Effects Log Page:May Support 00:23:34.643 NVMe-MI Commands & Effects Log Page: May Support 00:23:34.643 Data Area 4 for Telemetry Log: Not Supported 00:23:34.643 Error Log Page Entries Supported: 1 00:23:34.643 Keep Alive: Not Supported 00:23:34.643 00:23:34.643 NVM Command Set Attributes 00:23:34.643 ========================== 00:23:34.643 Submission Queue Entry Size 00:23:34.643 Max: 1 00:23:34.643 Min: 1 00:23:34.643 Completion Queue Entry Size 00:23:34.643 Max: 1 00:23:34.643 Min: 1 00:23:34.643 Number of Namespaces: 0 00:23:34.643 Compare Command: Not Supported 00:23:34.643 Write Uncorrectable Command: Not Supported 00:23:34.643 Dataset Management Command: Not Supported 00:23:34.643 Write Zeroes Command: Not Supported 00:23:34.643 Set Features Save Field: Not Supported 00:23:34.643 Reservations: Not Supported 00:23:34.643 Timestamp: Not Supported 00:23:34.643 Copy: Not Supported 00:23:34.643 Volatile Write Cache: Not Present 00:23:34.643 Atomic Write Unit (Normal): 1 00:23:34.643 Atomic Write Unit (PFail): 1 00:23:34.643 Atomic Compare & Write Unit: 1 00:23:34.643 Fused Compare & Write: Not Supported 00:23:34.643 Scatter-Gather List 00:23:34.643 SGL Command Set: Supported 00:23:34.643 SGL Keyed: Not Supported 00:23:34.643 SGL Bit Bucket Descriptor: Not Supported 00:23:34.643 SGL Metadata Pointer: Not Supported 00:23:34.643 Oversized SGL: Not Supported 00:23:34.643 SGL Metadata Address: Not Supported 00:23:34.643 SGL Offset: Supported 00:23:34.643 Transport SGL Data Block: Not Supported 00:23:34.643 Replay Protected Memory Block: Not Supported 00:23:34.643 00:23:34.643 Firmware Slot Information 00:23:34.643 ========================= 00:23:34.643 Active slot: 0 00:23:34.643 00:23:34.643 00:23:34.643 Error Log 00:23:34.643 ========= 00:23:34.643 00:23:34.643 Active Namespaces 00:23:34.643 ================= 00:23:34.643 Discovery Log Page 00:23:34.643 ================== 00:23:34.643 Generation Counter: 2 00:23:34.643 Number of Records: 2 00:23:34.643 Record Format: 0 00:23:34.643 00:23:34.643 Discovery Log Entry 0 00:23:34.643 ---------------------- 00:23:34.643 Transport Type: 3 (TCP) 00:23:34.643 Address Family: 1 (IPv4) 00:23:34.643 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:34.643 Entry Flags: 00:23:34.643 Duplicate Returned Information: 0 00:23:34.643 Explicit Persistent Connection Support for Discovery: 0 00:23:34.643 Transport Requirements: 00:23:34.643 Secure Channel: Not Specified 00:23:34.644 Port ID: 1 (0x0001) 00:23:34.644 Controller ID: 65535 (0xffff) 00:23:34.644 Admin Max SQ Size: 32 00:23:34.644 Transport Service Identifier: 4420 00:23:34.644 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:34.644 Transport Address: 10.0.0.1 00:23:34.644 Discovery Log Entry 1 00:23:34.644 ---------------------- 00:23:34.644 Transport Type: 3 (TCP) 00:23:34.644 Address Family: 1 (IPv4) 00:23:34.644 Subsystem Type: 2 (NVM Subsystem) 00:23:34.644 Entry Flags: 
00:23:34.644 Duplicate Returned Information: 0 00:23:34.644 Explicit Persistent Connection Support for Discovery: 0 00:23:34.644 Transport Requirements: 00:23:34.644 Secure Channel: Not Specified 00:23:34.644 Port ID: 1 (0x0001) 00:23:34.644 Controller ID: 65535 (0xffff) 00:23:34.644 Admin Max SQ Size: 32 00:23:34.644 Transport Service Identifier: 4420 00:23:34.644 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:34.644 Transport Address: 10.0.0.1 00:23:34.644 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:34.644 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.644 get_feature(0x01) failed 00:23:34.644 get_feature(0x02) failed 00:23:34.644 get_feature(0x04) failed 00:23:34.644 ===================================================== 00:23:34.644 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:34.644 ===================================================== 00:23:34.644 Controller Capabilities/Features 00:23:34.644 ================================ 00:23:34.644 Vendor ID: 0000 00:23:34.644 Subsystem Vendor ID: 0000 00:23:34.644 Serial Number: 2f975151cfdc3bc95ac2 00:23:34.644 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:34.644 Firmware Version: 6.7.0-68 00:23:34.644 Recommended Arb Burst: 6 00:23:34.644 IEEE OUI Identifier: 00 00 00 00:23:34.644 Multi-path I/O 00:23:34.644 May have multiple subsystem ports: Yes 00:23:34.644 May have multiple controllers: Yes 00:23:34.644 Associated with SR-IOV VF: No 00:23:34.644 Max Data Transfer Size: Unlimited 00:23:34.644 Max Number of Namespaces: 1024 00:23:34.644 Max Number of I/O Queues: 128 00:23:34.644 NVMe Specification Version (VS): 1.3 00:23:34.644 NVMe Specification Version (Identify): 1.3 00:23:34.644 Maximum Queue Entries: 1024 00:23:34.644 Contiguous Queues Required: No 00:23:34.644 Arbitration Mechanisms Supported 00:23:34.644 Weighted Round Robin: Not Supported 00:23:34.644 Vendor Specific: Not Supported 00:23:34.644 Reset Timeout: 7500 ms 00:23:34.644 Doorbell Stride: 4 bytes 00:23:34.644 NVM Subsystem Reset: Not Supported 00:23:34.644 Command Sets Supported 00:23:34.644 NVM Command Set: Supported 00:23:34.644 Boot Partition: Not Supported 00:23:34.644 Memory Page Size Minimum: 4096 bytes 00:23:34.644 Memory Page Size Maximum: 4096 bytes 00:23:34.644 Persistent Memory Region: Not Supported 00:23:34.644 Optional Asynchronous Events Supported 00:23:34.644 Namespace Attribute Notices: Supported 00:23:34.644 Firmware Activation Notices: Not Supported 00:23:34.644 ANA Change Notices: Supported 00:23:34.644 PLE Aggregate Log Change Notices: Not Supported 00:23:34.644 LBA Status Info Alert Notices: Not Supported 00:23:34.644 EGE Aggregate Log Change Notices: Not Supported 00:23:34.644 Normal NVM Subsystem Shutdown event: Not Supported 00:23:34.644 Zone Descriptor Change Notices: Not Supported 00:23:34.644 Discovery Log Change Notices: Not Supported 00:23:34.644 Controller Attributes 00:23:34.644 128-bit Host Identifier: Supported 00:23:34.644 Non-Operational Permissive Mode: Not Supported 00:23:34.644 NVM Sets: Not Supported 00:23:34.644 Read Recovery Levels: Not Supported 00:23:34.644 Endurance Groups: Not Supported 00:23:34.644 Predictable Latency Mode: Not Supported 00:23:34.644 Traffic Based Keep ALive: Supported 00:23:34.644 Namespace Granularity: Not Supported 
00:23:34.644 SQ Associations: Not Supported 00:23:34.644 UUID List: Not Supported 00:23:34.644 Multi-Domain Subsystem: Not Supported 00:23:34.644 Fixed Capacity Management: Not Supported 00:23:34.644 Variable Capacity Management: Not Supported 00:23:34.644 Delete Endurance Group: Not Supported 00:23:34.644 Delete NVM Set: Not Supported 00:23:34.644 Extended LBA Formats Supported: Not Supported 00:23:34.644 Flexible Data Placement Supported: Not Supported 00:23:34.644 00:23:34.644 Controller Memory Buffer Support 00:23:34.644 ================================ 00:23:34.644 Supported: No 00:23:34.644 00:23:34.644 Persistent Memory Region Support 00:23:34.644 ================================ 00:23:34.644 Supported: No 00:23:34.644 00:23:34.644 Admin Command Set Attributes 00:23:34.644 ============================ 00:23:34.644 Security Send/Receive: Not Supported 00:23:34.644 Format NVM: Not Supported 00:23:34.644 Firmware Activate/Download: Not Supported 00:23:34.644 Namespace Management: Not Supported 00:23:34.644 Device Self-Test: Not Supported 00:23:34.644 Directives: Not Supported 00:23:34.644 NVMe-MI: Not Supported 00:23:34.644 Virtualization Management: Not Supported 00:23:34.644 Doorbell Buffer Config: Not Supported 00:23:34.644 Get LBA Status Capability: Not Supported 00:23:34.644 Command & Feature Lockdown Capability: Not Supported 00:23:34.644 Abort Command Limit: 4 00:23:34.644 Async Event Request Limit: 4 00:23:34.644 Number of Firmware Slots: N/A 00:23:34.644 Firmware Slot 1 Read-Only: N/A 00:23:34.644 Firmware Activation Without Reset: N/A 00:23:34.644 Multiple Update Detection Support: N/A 00:23:34.644 Firmware Update Granularity: No Information Provided 00:23:34.644 Per-Namespace SMART Log: Yes 00:23:34.644 Asymmetric Namespace Access Log Page: Supported 00:23:34.644 ANA Transition Time : 10 sec 00:23:34.644 00:23:34.644 Asymmetric Namespace Access Capabilities 00:23:34.644 ANA Optimized State : Supported 00:23:34.644 ANA Non-Optimized State : Supported 00:23:34.644 ANA Inaccessible State : Supported 00:23:34.644 ANA Persistent Loss State : Supported 00:23:34.644 ANA Change State : Supported 00:23:34.644 ANAGRPID is not changed : No 00:23:34.644 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:34.644 00:23:34.644 ANA Group Identifier Maximum : 128 00:23:34.644 Number of ANA Group Identifiers : 128 00:23:34.644 Max Number of Allowed Namespaces : 1024 00:23:34.644 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:34.644 Command Effects Log Page: Supported 00:23:34.644 Get Log Page Extended Data: Supported 00:23:34.644 Telemetry Log Pages: Not Supported 00:23:34.644 Persistent Event Log Pages: Not Supported 00:23:34.644 Supported Log Pages Log Page: May Support 00:23:34.644 Commands Supported & Effects Log Page: Not Supported 00:23:34.644 Feature Identifiers & Effects Log Page:May Support 00:23:34.644 NVMe-MI Commands & Effects Log Page: May Support 00:23:34.644 Data Area 4 for Telemetry Log: Not Supported 00:23:34.644 Error Log Page Entries Supported: 128 00:23:34.644 Keep Alive: Supported 00:23:34.644 Keep Alive Granularity: 1000 ms 00:23:34.644 00:23:34.644 NVM Command Set Attributes 00:23:34.644 ========================== 00:23:34.644 Submission Queue Entry Size 00:23:34.644 Max: 64 00:23:34.644 Min: 64 00:23:34.644 Completion Queue Entry Size 00:23:34.644 Max: 16 00:23:34.644 Min: 16 00:23:34.644 Number of Namespaces: 1024 00:23:34.644 Compare Command: Not Supported 00:23:34.644 Write Uncorrectable Command: Not Supported 00:23:34.644 Dataset Management Command: Supported 
00:23:34.644 Write Zeroes Command: Supported 00:23:34.644 Set Features Save Field: Not Supported 00:23:34.644 Reservations: Not Supported 00:23:34.644 Timestamp: Not Supported 00:23:34.644 Copy: Not Supported 00:23:34.644 Volatile Write Cache: Present 00:23:34.644 Atomic Write Unit (Normal): 1 00:23:34.644 Atomic Write Unit (PFail): 1 00:23:34.644 Atomic Compare & Write Unit: 1 00:23:34.644 Fused Compare & Write: Not Supported 00:23:34.644 Scatter-Gather List 00:23:34.644 SGL Command Set: Supported 00:23:34.644 SGL Keyed: Not Supported 00:23:34.644 SGL Bit Bucket Descriptor: Not Supported 00:23:34.644 SGL Metadata Pointer: Not Supported 00:23:34.644 Oversized SGL: Not Supported 00:23:34.644 SGL Metadata Address: Not Supported 00:23:34.644 SGL Offset: Supported 00:23:34.644 Transport SGL Data Block: Not Supported 00:23:34.644 Replay Protected Memory Block: Not Supported 00:23:34.644 00:23:34.644 Firmware Slot Information 00:23:34.644 ========================= 00:23:34.644 Active slot: 0 00:23:34.644 00:23:34.644 Asymmetric Namespace Access 00:23:34.644 =========================== 00:23:34.644 Change Count : 0 00:23:34.644 Number of ANA Group Descriptors : 1 00:23:34.644 ANA Group Descriptor : 0 00:23:34.644 ANA Group ID : 1 00:23:34.644 Number of NSID Values : 1 00:23:34.644 Change Count : 0 00:23:34.644 ANA State : 1 00:23:34.644 Namespace Identifier : 1 00:23:34.644 00:23:34.644 Commands Supported and Effects 00:23:34.644 ============================== 00:23:34.644 Admin Commands 00:23:34.644 -------------- 00:23:34.644 Get Log Page (02h): Supported 00:23:34.644 Identify (06h): Supported 00:23:34.644 Abort (08h): Supported 00:23:34.644 Set Features (09h): Supported 00:23:34.644 Get Features (0Ah): Supported 00:23:34.644 Asynchronous Event Request (0Ch): Supported 00:23:34.644 Keep Alive (18h): Supported 00:23:34.645 I/O Commands 00:23:34.645 ------------ 00:23:34.645 Flush (00h): Supported 00:23:34.645 Write (01h): Supported LBA-Change 00:23:34.645 Read (02h): Supported 00:23:34.645 Write Zeroes (08h): Supported LBA-Change 00:23:34.645 Dataset Management (09h): Supported 00:23:34.645 00:23:34.645 Error Log 00:23:34.645 ========= 00:23:34.645 Entry: 0 00:23:34.645 Error Count: 0x3 00:23:34.645 Submission Queue Id: 0x0 00:23:34.645 Command Id: 0x5 00:23:34.645 Phase Bit: 0 00:23:34.645 Status Code: 0x2 00:23:34.645 Status Code Type: 0x0 00:23:34.645 Do Not Retry: 1 00:23:34.645 Error Location: 0x28 00:23:34.645 LBA: 0x0 00:23:34.645 Namespace: 0x0 00:23:34.645 Vendor Log Page: 0x0 00:23:34.645 ----------- 00:23:34.645 Entry: 1 00:23:34.645 Error Count: 0x2 00:23:34.645 Submission Queue Id: 0x0 00:23:34.645 Command Id: 0x5 00:23:34.645 Phase Bit: 0 00:23:34.645 Status Code: 0x2 00:23:34.645 Status Code Type: 0x0 00:23:34.645 Do Not Retry: 1 00:23:34.645 Error Location: 0x28 00:23:34.645 LBA: 0x0 00:23:34.645 Namespace: 0x0 00:23:34.645 Vendor Log Page: 0x0 00:23:34.645 ----------- 00:23:34.645 Entry: 2 00:23:34.645 Error Count: 0x1 00:23:34.645 Submission Queue Id: 0x0 00:23:34.645 Command Id: 0x4 00:23:34.645 Phase Bit: 0 00:23:34.645 Status Code: 0x2 00:23:34.645 Status Code Type: 0x0 00:23:34.645 Do Not Retry: 1 00:23:34.645 Error Location: 0x28 00:23:34.645 LBA: 0x0 00:23:34.645 Namespace: 0x0 00:23:34.645 Vendor Log Page: 0x0 00:23:34.645 00:23:34.645 Number of Queues 00:23:34.645 ================ 00:23:34.645 Number of I/O Submission Queues: 128 00:23:34.645 Number of I/O Completion Queues: 128 00:23:34.645 00:23:34.645 ZNS Specific Controller Data 00:23:34.645 
============================ 00:23:34.645 Zone Append Size Limit: 0 00:23:34.645 00:23:34.645 00:23:34.645 Active Namespaces 00:23:34.645 ================= 00:23:34.645 get_feature(0x05) failed 00:23:34.645 Namespace ID:1 00:23:34.645 Command Set Identifier: NVM (00h) 00:23:34.645 Deallocate: Supported 00:23:34.645 Deallocated/Unwritten Error: Not Supported 00:23:34.645 Deallocated Read Value: Unknown 00:23:34.645 Deallocate in Write Zeroes: Not Supported 00:23:34.645 Deallocated Guard Field: 0xFFFF 00:23:34.645 Flush: Supported 00:23:34.645 Reservation: Not Supported 00:23:34.645 Namespace Sharing Capabilities: Multiple Controllers 00:23:34.645 Size (in LBAs): 1953525168 (931GiB) 00:23:34.645 Capacity (in LBAs): 1953525168 (931GiB) 00:23:34.645 Utilization (in LBAs): 1953525168 (931GiB) 00:23:34.645 UUID: 13d0ac8e-164b-43cf-baba-82440d2b22f8 00:23:34.645 Thin Provisioning: Not Supported 00:23:34.645 Per-NS Atomic Units: Yes 00:23:34.645 Atomic Boundary Size (Normal): 0 00:23:34.645 Atomic Boundary Size (PFail): 0 00:23:34.645 Atomic Boundary Offset: 0 00:23:34.645 NGUID/EUI64 Never Reused: No 00:23:34.645 ANA group ID: 1 00:23:34.645 Namespace Write Protected: No 00:23:34.645 Number of LBA Formats: 1 00:23:34.645 Current LBA Format: LBA Format #00 00:23:34.645 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:34.645 00:23:34.645 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:34.645 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:34.645 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:34.645 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:34.645 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:34.645 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:34.645 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:34.645 rmmod nvme_tcp 00:23:34.645 rmmod nvme_fabrics 00:23:34.903 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:34.903 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:34.903 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:34.903 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:34.903 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:34.903 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:34.903 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:34.903 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:34.903 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:34.903 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.903 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:34.903 10:36:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.809 10:36:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:36.809 
10:36:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:36.809 10:36:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:36.809 10:36:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:36.809 10:36:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:36.809 10:36:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:36.809 10:36:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:36.809 10:36:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:36.809 10:36:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:36.809 10:36:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:36.809 10:36:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:38.186 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:38.186 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:38.186 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:38.186 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:38.186 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:38.186 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:38.186 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:38.186 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:38.186 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:38.186 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:38.186 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:38.186 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:38.186 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:38.186 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:38.186 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:38.186 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:39.121 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:39.121 00:23:39.121 real 0m9.368s 00:23:39.121 user 0m1.892s 00:23:39.121 sys 0m3.403s 00:23:39.121 10:36:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:39.121 10:36:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.121 ************************************ 00:23:39.121 END TEST nvmf_identify_kernel_target 00:23:39.121 ************************************ 00:23:39.121 10:36:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:39.121 10:36:33 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:39.121 10:36:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:39.121 10:36:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:39.121 10:36:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.379 ************************************ 00:23:39.379 START TEST nvmf_auth_host 00:23:39.379 ************************************ 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:39.379 * Looking for test storage... 00:23:39.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.379 10:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.280 
10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:41.280 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:41.280 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:41.280 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:41.280 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.280 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.538 10:36:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.538 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:41.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:23:41.538 00:23:41.538 --- 10.0.0.2 ping statistics --- 00:23:41.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.538 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:23:41.538 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:41.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:23:41.538 00:23:41.538 --- 10.0.0.1 ping statistics --- 00:23:41.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.538 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:41.538 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.538 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:41.538 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:41.538 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2398122 00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2398122 00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2398122 ']' 00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
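For reference, the nvmf_tcp_init plumbing traced above condenses to the sequence below. This is a sketch assembled from the commands visible in this log, not the helper itself; cvl_0_0 and cvl_0_1 are the two E810 ports enumerated earlier, with the target moved into its own network namespace and the initiator left in the root namespace.

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1                # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

The two ping checks are what produced the statistics shown above; only after both directions answer does the test move on to starting the target application.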
00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.539 10:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c117c671147d16fdcae534073aefd1f6 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.zeP 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c117c671147d16fdcae534073aefd1f6 0 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c117c671147d16fdcae534073aefd1f6 0 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c117c671147d16fdcae534073aefd1f6 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.zeP 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.zeP 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.zeP 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:41.797 
10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=83edd7db998e39eb5e9cf4600ac791157d5664df6989e36e0dfb458e588d7abf 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.wSr 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 83edd7db998e39eb5e9cf4600ac791157d5664df6989e36e0dfb458e588d7abf 3 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 83edd7db998e39eb5e9cf4600ac791157d5664df6989e36e0dfb458e588d7abf 3 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=83edd7db998e39eb5e9cf4600ac791157d5664df6989e36e0dfb458e588d7abf 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:41.797 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.wSr 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.wSr 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.wSr 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9a0a090a72e24bf5d87f09e4f2c80a644b2f40552264f09a 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.bA0 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9a0a090a72e24bf5d87f09e4f2c80a644b2f40552264f09a 0 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9a0a090a72e24bf5d87f09e4f2c80a644b2f40552264f09a 0 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9a0a090a72e24bf5d87f09e4f2c80a644b2f40552264f09a 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:42.055 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.bA0 00:23:42.056 10:36:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.bA0 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.bA0 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a50f73f09725849e2a5ef5a820c043800f8ca37d02089696 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.e3K 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a50f73f09725849e2a5ef5a820c043800f8ca37d02089696 2 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a50f73f09725849e2a5ef5a820c043800f8ca37d02089696 2 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a50f73f09725849e2a5ef5a820c043800f8ca37d02089696 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.e3K 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.e3K 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.e3K 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a7c2dd1d60f642cc1bf8c309b5c09d64 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0ls 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a7c2dd1d60f642cc1bf8c309b5c09d64 1 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a7c2dd1d60f642cc1bf8c309b5c09d64 1 
00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a7c2dd1d60f642cc1bf8c309b5c09d64 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0ls 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0ls 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.0ls 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=375247be78a3c093dbb433a66aee40a2 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.612 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 375247be78a3c093dbb433a66aee40a2 1 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 375247be78a3c093dbb433a66aee40a2 1 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=375247be78a3c093dbb433a66aee40a2 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.612 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.612 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.612 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=ae44fd7a2684108a644a1bcd9511322908f2bcd40035090f 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.eUR 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ae44fd7a2684108a644a1bcd9511322908f2bcd40035090f 2 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ae44fd7a2684108a644a1bcd9511322908f2bcd40035090f 2 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ae44fd7a2684108a644a1bcd9511322908f2bcd40035090f 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:42.056 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.eUR 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.eUR 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.eUR 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=198bfc933c6b7f341f7e4a797fe7bc8d 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1cv 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 198bfc933c6b7f341f7e4a797fe7bc8d 0 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 198bfc933c6b7f341f7e4a797fe7bc8d 0 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=198bfc933c6b7f341f7e4a797fe7bc8d 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1cv 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1cv 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.1cv 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7cb3a72c0f0dc5fb27fd691d19eae79ac62a66443533014d2788fee03be0083c 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.dSl 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7cb3a72c0f0dc5fb27fd691d19eae79ac62a66443533014d2788fee03be0083c 3 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7cb3a72c0f0dc5fb27fd691d19eae79ac62a66443533014d2788fee03be0083c 3 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7cb3a72c0f0dc5fb27fd691d19eae79ac62a66443533014d2788fee03be0083c 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.dSl 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.dSl 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.dSl 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2398122 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2398122 ']' 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
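At this point all of the secrets are generated: keys[0..4] plus controller counterparts ckeys[0..3] (ckeys[4] is intentionally left empty). Each gen_dhchap_key call drew len/2 random bytes with xxd and passed the hex string through format_key, which prints the DH-HMAC-CHAP secret representation DHHC-1:<hash-id>:<base64 payload>: (the digest argument 0/1/2/3 seen in the traces maps to none/SHA-256/SHA-384/SHA-512). A minimal standalone sketch of that last step, assuming the NVMe TP 8006 encoding in which the payload is the raw key bytes followed by their little-endian CRC-32:

    key_hex=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars, i.e. a 16-byte secret
    # Emit base64(key || CRC-32(key)) inside the DHHC-1 envelope; 01 = hmac(sha256).
    python3 -c 'import base64,binascii,struct,sys; k=binascii.unhexlify(sys.argv[1]); print("DHHC-1:01:"+base64.b64encode(k+struct.pack("<I",binascii.crc32(k))).decode()+":")' "$key_hex"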
00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.315 10:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.574 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.574 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:42.574 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zeP 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.wSr ]] 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wSr 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.bA0 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.e3K ]] 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.e3K 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.0ls 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.612 ]] 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.612 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.eUR 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.1cv ]] 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.1cv 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.dSl 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.575 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.833 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.833 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.833 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.833 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.833 10:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:42.833 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:42.833 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:42.833 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:42.833 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:42.833 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:42.833 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
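The keyring_file_add_key RPCs above register each generated secret file with the running target under a short name (key0 through key4 for the host secrets, ckey0 through ckey3 for the controller secrets). The rpc_cmd wrapper forwards to SPDK's RPC client, so outside the harness the same registration would look roughly like this for the first pair, using the file paths from this run:

    scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0 /tmp/spdk.key-null.zeP
    scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wSr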
00:23:42.833 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:42.833 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:42.833 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:42.833 10:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:43.765 Waiting for block devices as requested 00:23:43.765 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:44.022 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:44.022 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:44.022 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:44.279 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:44.279 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:44.279 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:44.279 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:44.536 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:44.536 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:44.536 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:44.536 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:44.793 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:44.793 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:44.794 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:44.794 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:45.052 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:45.310 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:45.310 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:45.310 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:45.310 10:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:45.310 10:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:45.310 10:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:45.310 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:45.310 10:36:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:45.310 10:36:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:45.310 No valid GPT data, bailing 00:23:45.310 10:36:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:45.310 10:36:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:45.310 10:36:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:45.310 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:45.310 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:45.310 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:45.310 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:45.568 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:45.568 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:45.568 10:36:39 nvmf_tcp.nvmf_auth_host -- 
00:23:45.568 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1
00:23:45.568 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:23:45.568 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1
00:23:45.568 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:23:45.568 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp
00:23:45.568 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420
00:23:45.568 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4
00:23:45.568 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:23:45.568 10:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420
00:23:45.568
00:23:45.568 Discovery Log Number of Records 2, Generation counter 2
00:23:45.568 =====Discovery Log Entry 0======
00:23:45.568 trtype:  tcp
00:23:45.568 adrfam:  ipv4
00:23:45.568 subtype: current discovery subsystem
00:23:45.568 treq:    not specified, sq flow control disable supported
00:23:45.568 portid:  1
00:23:45.568 trsvcid: 4420
00:23:45.568 subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:23:45.568 traddr:  10.0.0.1
00:23:45.568 eflags:  none
00:23:45.568 sectype: none
00:23:45.568 =====Discovery Log Entry 1======
00:23:45.568 trtype:  tcp
00:23:45.568 adrfam:  ipv4
00:23:45.568 subtype: nvme subsystem
00:23:45.568 treq:    not specified, sq flow control disable supported
00:23:45.568 portid:  1
00:23:45.568 trsvcid: 4420
00:23:45.568 subnqn:  nqn.2024-02.io.spdk:cnode0
00:23:45.568 traddr:  10.0.0.1
00:23:45.568 eflags:  none
00:23:45.568 sectype: none
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
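At this point the kernel target is fully assembled through nvmet configfs: a subsystem with one namespace backed by /dev/nvme0n1, a TCP port on 10.0.0.1:4420, a discovery sanity check, and an allowed_hosts entry that restricts access to nqn.2024-02.io.spdk:host0. The xtrace hides the redirection targets of the echo commands, so the following is a sketch against the stock nvmet configfs attribute names rather than the script verbatim:

  # Sketch: rebuild the same kernel NVMe-oF/TCP target by hand
  # (attribute names assumed from standard nvmet; values from the log)
  modprobe nvmet
  cd /sys/kernel/config/nvmet
  mkdir -p subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 ports/1
  echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
  echo 1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
  echo tcp > ports/1/addr_trtype
  echo ipv4 > ports/1/addr_adrfam
  echo 10.0.0.1 > ports/1/addr_traddr
  echo 4420 > ports/1/addr_trsvcid
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ports/1/subsystems/
  # Admit only the test host instead of allowing any host NQN:
  echo 0 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host
  mkdir hosts/nqn.2024-02.io.spdk:host0
  ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/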
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==:
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==:
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==:
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: ]]
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==:
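nvmet_auth_set_key programs the target's expectations for one round: the DHCHAP hash, the FFDHE group, and the host/controller secrets for the chosen keyid. The redirects are again hidden by xtrace; a sketch against the standard per-host nvmet auth attributes of recent kernels (assumed, not shown in the log):

  # Sketch: target-side DHCHAP settings for one host
  H=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$H/dhchap_hash"
  echo ffdhe2048 > "$H/dhchap_dhgroup"
  echo 'DHHC-1:00:...:' > "$H/dhchap_key"       # host secret (elided; full value above)
  echo 'DHHC-1:02:...:' > "$H/dhchap_ctrl_key"  # controller secret, only when a ckey exists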
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:45.568 nvme0n1
00:23:45.568 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:45.569 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:45.569 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:45.569 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:45.569 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
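The initiator half of each round is plain SPDK RPC: pin the digest/dhgroup proposals with bdev_nvme_set_options, attach using the keyring names, confirm the controller exists, and detach. Spelled out as direct rpc.py calls equivalent to the rpc_cmd wrapper above (a sketch, not the harness verbatim):

  # One authentication round from the initiator side
  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  scripts/rpc.py bdev_nvme_get_controllers   # expect "nvme0" to be listed
  scripts/rpc.py bdev_nvme_detach_controller nvme0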
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN:
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=:
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN:
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: ]]
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=:
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:45.830 nvme0n1
00:23:45.830 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
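From here the log repeats one pattern per combination. The host/auth.sh@100-104 markers in the xtrace correspond to a nested sweep roughly like the following (reconstructed shape, not the script verbatim):

  # Sweep every digest x dhgroup x key index combination
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side
      done
    done
  done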
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==:
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==:
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==:
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: ]]
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==:
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.089 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y:
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt:
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y:
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: ]]
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt:
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:46.090 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:46.349 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:46.349 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:46.349 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:46.349 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:46.349 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:46.349 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:46.349 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.349 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.349 nvme0n1
00:23:46.349 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.349 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==:
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt:
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==:
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: ]]
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt:
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.350 10:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.609 nvme0n1
00:23:46.609 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.609 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:46.609 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:46.609 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.609 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=:
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=:
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.610 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.869 nvme0n1
00:23:46.869 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.869 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:46.869 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.869 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:46.869 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.869 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.869 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:46.869 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:46.869 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.869 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:46.869 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
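keyid 4 carries no controller secret, so the earlier [[ -z '' ]] branch skipped writing a controller key on the target and the attach above ran with --dhchap-key key4 alone, i.e. one-way (host-only) authentication. The harness gets that for free from the conditional array expansion seen at host/auth.sh@58:

  # Expands to zero words when ckeys[keyid] is empty/unset, so the flag
  # vanishes from the command line entirely (abridged; transport args elided)
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller ... --dhchap-key "key${keyid}" "${ckey[@]}"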
"ckey${keyid}"}) 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.870 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.129 nvme0n1 00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host 
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==:
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==:
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==:
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: ]]
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==:
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.129 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.389 nvme0n1
10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
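Each round only counts as a pass once the controller shows up by name; the host/auth.sh@64 checks repeated above boil down to the following (a sketch of the same logic as standalone rpc.py calls):

  # Success check between attach and detach
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0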
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y:
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt:
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y:
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: ]]
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt:
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.389 10:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.647 nvme0n1
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==:
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt:
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==:
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: ]]
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt:
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.647 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.905 nvme0n1
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=:
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=:
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.905 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:48.163 nvme0n1
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN:
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=:
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN:
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: ]]
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=:
00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.163 10:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.422 nvme0n1 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: ]] 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.422 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.681 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.681 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.681 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.681 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.681 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.681 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.681 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.681 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.681 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.681 10:36:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.681 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.681 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.940 nvme0n1 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: ]] 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.941 10:36:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.941 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.199 nvme0n1 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
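Every iteration traced above follows the same fixed shape for each (digest, dhgroup, keyid) combination: program the key pair into the kernel nvmet target, restrict the SPDK initiator to the one digest/DH group under test, attach with the matching key names, check that the controller surfaces as nvme0 (with its namespace nvme0n1), and detach. Below is a minimal sketch of a single such iteration, assembled from the commands visible in this log; the configfs paths on the target side are an assumption, since the excerpt only shows the nvmet_auth_set_key helper echoing 'hmac(sha256)', the dhgroup, and the DHHC-1 secrets, not where they are written.

  # Sketch of one connect_authenticate-style iteration (secret values shortened).
  # rpc_cmd, the NQNs, and the 10.0.0.1:4420 address are taken from the log;
  # the /sys/kernel/config/nvmet layout is an assumption, not shown above.
  digest=sha256 dhgroup=ffdhe4096 keyid=1
  hostnqn=nqn.2024-02.io.spdk:host0
  key="DHHC-1:00:OWEwYTA5..."      # keys[keyid] from the log
  ckey="DHHC-1:02:YTUwZjcz..."     # ckeys[keyid]; empty for keyid=4

  # Target side (nvmet_auth_set_key): hash, DH group, and secrets for this host.
  echo "hmac($digest)" > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_hash
  echo "$dhgroup"      > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_dhgroup
  echo "$key"          > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_key
  [[ -n $ckey ]] && echo "$ckey" > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_ctrl_key

  # Initiator side (connect_authenticate): pin the negotiation parameters, then
  # attach to 10.0.0.1, the address get_main_ns_ip resolves from NVMF_INITIATOR_IP
  # for the tcp transport via its ip_candidates table.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q $hostnqn -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" ${ckey:+--dhchap-ctrlr-key "ckey$keyid"}

  # Success check and teardown, as in the log.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0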
00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:49.199 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: ]] 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.200 10:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.458 nvme0n1 00:23:49.458 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.458 10:36:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.458 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.458 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.458 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.458 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.717 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.977 nvme0n1 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:23:49.977 10:36:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: ]] 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.977 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.546 nvme0n1 00:23:50.546 10:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.546 
10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: ]] 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.546 10:36:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.546 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.115 nvme0n1 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: ]] 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.115 10:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.683 nvme0n1 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.683 
10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: ]] 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.683 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.684 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:51.684 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.684 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.251 nvme0n1 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.251 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.252 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.252 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.252 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.252 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.252 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.252 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.252 10:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.252 10:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:52.252 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.252 10:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.819 nvme0n1 00:23:52.819 10:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.819 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.819 10:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.819 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.819 10:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.819 10:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.079 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: ]] 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.080 10:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.081 10:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.081 10:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.081 10:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.015 nvme0n1 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.015 10:36:48 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: ]] 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.015 10:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.970 nvme0n1 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: ]] 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.970 10:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.910 nvme0n1 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.910 
10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: ]] 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
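The expansions traced from nvmf/common.sh@741-755 around this point are the helper get_main_ns_ip resolving which address the initiator should dial. Reconstructed from those traced records (the early-return checks are inferred from the [[ -z ... ]] tests in the trace; this is a sketch, not the verbatim helper):

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # common.sh@744
    ip_candidates["tcp"]=NVMF_INITIATOR_IP        # common.sh@745
    [[ -z $TEST_TRANSPORT ]] && return 1                    # "tcp" in this run
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}    # holds the *name* of an env var
    [[ -z ${!ip} ]] && return 1             # indirect expansion; 10.0.0.1 here
    echo "${!ip}"                           # common.sh@755
}

The printed 10.0.0.1 then becomes the -a argument of the bdev_nvme_attach_controller call traced next.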
00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.910 10:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.847 nvme0n1 00:23:56.847 10:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.847 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.847 10:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.847 10:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.847 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.847 10:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:57.105 
10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.105 10:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.041 nvme0n1 00:23:58.041 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.041 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.041 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.041 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.041 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.041 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.041 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.041 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.041 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.041 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.041 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.041 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:58.041 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:58.041 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.041 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:23:58.041 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.041 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: ]] 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.042 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.303 nvme0n1 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: ]] 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
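The echo records at host/auth.sh@48-51 in the block above are nvmet_auth_set_key pushing the iteration's digest, DH group and key material into the kernel nvmet target. xtrace does not capture where those echoes are redirected; a plausible sketch follows, in which the configfs paths and attribute names are an assumption (only the echoed values come from this log) and keys/ckeys are the test's key arrays:

nvmet_auth_set_key() {
    # Hypothetical reconstruction: the configfs paths below are assumed,
    # not shown in the trace; the echoed values are taken from auth.sh@48-51.
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(${digest})" > "${host}/dhchap_hash"    # e.g. hmac(sha384)
    echo "${dhgroup}" > "${host}/dhchap_dhgroup"      # e.g. ffdhe2048
    echo "${key}" > "${host}/dhchap_key"              # DHHC-1:... host secret
    [[ -z $ckey ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"  # only for bidirectional auth
}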
00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.303 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.563 nvme0n1 00:23:58.563 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.563 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.563 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.563 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.563 10:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.563 10:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: ]] 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.563 nvme0n1 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.563 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: ]] 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:58.820 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.821 nvme0n1 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.821 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.104 nvme0n1 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: ]] 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:59.104 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.105 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.105 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.105 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.105 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.105 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.105 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.105 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.105 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.105 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
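On the initiator side each iteration boils down to the two RPCs visible verbatim in the trace: bdev_nvme_set_options narrows the digests and DH groups the host will negotiate, and bdev_nvme_attach_controller connects with the keys for the current keyid. Written out as direct scripts/rpc.py calls (rpc_cmd is the test suite's wrapper around rpc.py; key0/ckey0 are key names registered earlier in the run, outside this excerpt):

./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

If the DH-HMAC-CHAP handshake fails, the attach itself errors out and no nvme0 controller is created, which is exactly what the follow-up check looks for.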
00:23:59.105 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.105 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.105 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.105 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.105 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.105 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.105 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.396 nvme0n1 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: ]] 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
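Each bare nvme0n1 in this log is the namespace appearing after a successful authenticated attach, and the records from host/auth.sh@64-65 right after it are the same verify-and-teardown every time, equivalent to:

name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')   # auth.sh@64
[[ $name == "nvme0" ]]    # xtrace prints the quoted RHS as \n\v\m\e\0
rpc_cmd bdev_nvme_detach_controller nvme0                      # auth.sh@65

A missing or misnamed controller fails the [[ ... ]] comparison and aborts the run, so every nvme0n1 that follows marks one more (digest, dhgroup, keyid) combination that authenticated cleanly.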
00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.396 10:36:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.655 nvme0n1 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: ]] 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.655 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.656 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.656 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.656 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:59.656 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.656 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.916 nvme0n1 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: ]] 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.916 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.181 nvme0n1 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.181 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.443 nvme0n1 00:24:00.443 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.443 10:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.443 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.443 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.443 10:36:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.443 10:36:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: ]] 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.443 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.703 nvme0n1 00:24:00.703 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.703 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.703 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.703 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.703 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.703 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: ]] 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.964 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.965 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.965 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.965 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.965 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.965 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.965 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.965 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:00.965 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.965 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.225 nvme0n1 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.225 10:36:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: ]] 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.225 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.226 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.226 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.226 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.226 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.226 10:36:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:01.226 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.226 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.226 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.226 10:36:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.226 10:36:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.226 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.226 10:36:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.486 nvme0n1 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: ]] 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:01.486 10:36:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.486 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.056 nvme0n1 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.056 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.057 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.057 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.057 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.057 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.057 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:02.057 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:02.057 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.318 nvme0n1 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: ]] 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.318 10:36:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.889 nvme0n1 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: ]] 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.889 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.458 nvme0n1 00:24:03.458 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.458 10:36:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.458 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.458 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.458 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.458 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.458 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.458 10:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.458 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.458 10:36:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: ]] 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.458 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.028 nvme0n1 00:24:04.028 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.028 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.028 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.028 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.028 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.028 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.028 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.028 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.028 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.028 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.028 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: ]] 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.029 10:36:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.596 nvme0n1 00:24:04.596 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.596 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.596 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.596 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.596 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.596 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.596 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:24:04.596 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.596 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.596 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.596 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.596 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.596 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
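Each `nvme0n1` block above traces the same host-side round trip: pin the host's DH-HMAC-CHAP options, attach, check the controller name, detach. A minimal sketch of that cycle follows; it is not the test's literal code, and it assumes the target from earlier in the run is still listening on 10.0.0.1:4420, that keys key0..key4 and ckey0..ckey3 were registered earlier in the run, and that scripts/rpc.py is invoked from an SPDK checkout:

    rpc() { scripts/rpc.py "$@"; }

    # ckey4 is empty in the trace above: key4 authenticates the host only.
    ckeys=(ckey0 ckey1 ckey2 ckey3 "")

    connect_cycle() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Pin the host to a single digest and DH group per iteration,
        # as the traced bdev_nvme_set_options calls do.
        rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach with DH-HMAC-CHAP; --dhchap-ctrlr-key, when present, makes the
        # controller authenticate back to the host (bidirectional auth).
        rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "${ckeys[keyid]}"}
        # Pass criterion used in the trace: the controller comes back named nvme0.
        [[ $(rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        # Tear down before the next digest/dhgroup/key combination.
        rpc bdev_nvme_detach_controller nvme0
    }

    connect_cycle sha384 ffdhe6144 3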
00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.854 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:04.855 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.855 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.421 nvme0n1 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: ]] 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
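The interleaved `for dhgroup` / `for keyid in "${!keys[@]}"` entries show the shape of the sweep: for each DH group the script walks all five key indices, first pointing the target at the key (the echo traces show nvmet_auth_set_key pushing the 'hmac(sha384)' digest, the DH group, and the DHHC-1 key material to the target side), then reconnecting from the host. The same shape in outline, reusing connect_cycle from the sketch above; nvmet_auth_set_key is the test's own helper, named here only for structure:

    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do  # groups covered in this stretch of the log
        for keyid in 0 1 2 3 4; do                              # the trace's "${!keys[@]}"
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"       # target side
            connect_cycle      sha384 "$dhgroup" "$keyid"       # host side
        done
    done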
00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.421 10:36:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.359 nvme0n1 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: ]] 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.359 10:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.291 nvme0n1 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:07.291 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: ]] 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.548 10:37:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.485 nvme0n1 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: ]] 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.485 10:37:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.485 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.485 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:08.485 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.485 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.485 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.485 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.485 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:08.485 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.485 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:08.485 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:08.485 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:08.485 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:08.485 10:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.485 10:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.449 nvme0n1 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:09.449 10:37:03 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.449 10:37:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.386 nvme0n1 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: ]] 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.386 10:37:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.644 nvme0n1 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.644 10:37:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: ]] 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.644 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.929 nvme0n1 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: ]] 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.929 nvme0n1 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.929 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.188 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.188 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.188 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.188 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.188 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.188 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.188 10:37:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.188 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:11.188 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.188 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: ]] 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.189 10:37:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.189 nvme0n1 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.189 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.484 10:37:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.484 nvme0n1 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: ]] 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.484 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.744 nvme0n1 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.744 
10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: ]] 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.744 10:37:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.744 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.005 nvme0n1 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
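[Annotation] The xtrace records around this point show only the echo halves of nvmet_auth_set_key (host/auth.sh@48-51): bash's xtrace does not print redirections, so the targets of those writes are invisible in this log. As a minimal sketch of what the helper is doing on the target side — assuming the standard Linux nvmet per-host DH-HMAC-CHAP configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and a $nvmet_host directory variable, neither of which appears in this trace — the function reduces to roughly:

    # Hypothetical reconstruction; $nvmet_host and the attribute names are
    # assumptions. Only the echoed values and the [[ -z ]] guard are taken
    # from the trace above. The keys/ckeys arrays are populated earlier in
    # auth.sh (their expansions appear at host/auth.sh@45-46).
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
        echo "hmac($digest)" > "$nvmet_host/dhchap_hash"    # e.g. 'hmac(sha512)'
        echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"      # e.g. ffdhe3072
        echo "$key" > "$nvmet_host/dhchap_key"              # DHHC-1:xx:...
        # A controller (bidirectional) key is written only when one exists
        # for this keyid, matching the [[ -z ... ]] check at host/auth.sh@51.
        [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
    }

Note that keyid=4 has no controller key in this run, which is why its iterations show ckey= and [[ -z '' ]] with the fourth write skipped.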
00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: ]] 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.005 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.265 nvme0n1 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.265 10:37:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: ]] 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
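
The nvmf/common.sh@741-755 entries being traced here are one expansion of get_main_ns_ip. A minimal sketch of what those entries imply, with the early-return structure assumed (the trace shows only the tests, not the branches taken on failure): the associative array maps the transport to the *name* of the environment variable holding the address, and the final ${!ip} indirection resolves that name, which is why the trace tests the literal string NVMF_INITIATOR_IP first and the value 10.0.0.1 after.

    # Sketch only; control flow on the failure paths is assumed.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                 # traced as [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                 # ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                          # traced as [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                        # 10.0.0.1 on this rig
    }
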
00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.265 10:37:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.524 nvme0n1 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:12.524 
10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.524 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.821 nvme0n1 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: ]] 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:12.821 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.822 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.103 nvme0n1 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: ]] 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.103 10:37:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.103 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.361 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.361 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.361 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.361 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.361 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.361 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.361 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.361 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.361 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.361 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.361 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.361 10:37:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.361 10:37:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.361 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.361 10:37:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.622 nvme0n1 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
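
Each host-side cycle traced above (auth.sh@55-65) follows the same shape: configure the initiator's allowed digest and DH group, attach with the per-key secrets, confirm the controller actually appeared, then detach. A minimal reconstruction, under the assumptions that $hostnqn and $subnqn hold the NQNs printed in the attach calls and that -t is really $TEST_TRANSPORT (this excerpt only ever shows tcp):

    # Sketch only; argument plumbing inferred from the trace.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey                  # auth.sh@55-57
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})   # auth.sh@58
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"      # auth.sh@60
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t "$TEST_TRANSPORT" \
            -f ipv4 -a "$(get_main_ns_ip)" -s 4420 -q "$hostnqn" -n "$subnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"                 # auth.sh@61
        # The attach alone is not the pass criterion: auth.sh@64 checks the
        # controller is really present before tearing it down at @65.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
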
00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: ]] 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.622 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.882 nvme0n1 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: ]] 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.882 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.141 nvme0n1 00:24:14.141 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.141 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.141 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.141 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.141 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.141 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.399 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.400 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.400 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.400 10:37:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.400 10:37:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:14.400 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.400 10:37:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.658 nvme0n1 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: ]] 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
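
By this point the trace has rolled from ffdhe4096 into ffdhe6144 (the loops at auth.sh@101-102 re-entering above), so the overall driver is a three-deep sweep. A sketch of that nesting; the full digest and dhgroup lists are inferred, since this excerpt only shows the sha512 passes from ffdhe3072 onward:

    # Sketch only; array contents before sha512/ffdhe3072 are assumptions.
    digests=(sha256 sha384 sha512)
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do           # auth.sh@101
            for keyid in "${!keys[@]}"; do            # auth.sh@102, keys 0..4 here
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # auth.sh@103
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # auth.sh@104
            done
        done
    done
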
00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.658 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.227 nvme0n1 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: ]] 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
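
A side note on the secrets themselves: every key in this trace uses the NVMe DH-HMAC-CHAP representation DHHC-1:<id>:<base64>:, where the id field (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) describes the secret's length/transform and the base64 payload is the secret plus a 4-byte CRC-32. That id is independent of the digest being negotiated, which is why these sha512 rounds happily use DHHC-1:01: keys. A quick hypothetical check (not part of auth.sh), using one key from the trace:

    key='DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y:'
    IFS=: read -r tag id b64 _ <<< "$key"
    # 48 base64 chars -> 36 bytes = 32-byte secret + CRC-32, matching id 01
    echo "$tag id=$id payload=$(printf '%s' "$b64" | base64 -d | wc -c) bytes"
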
00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.227 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.228 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.228 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.228 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:15.228 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.228 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.228 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.228 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.228 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.228 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.228 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.228 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.228 10:37:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.228 10:37:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.228 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.228 10:37:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.797 nvme0n1 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: ]] 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:15.797 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.798 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.798 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.798 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.798 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:15.798 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.798 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.798 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.798 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.798 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.798 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.798 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.798 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.798 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.798 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.798 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.798 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.367 nvme0n1 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: ]] 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.367 10:37:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.936 nvme0n1 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.936 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.937 10:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.505 nvme0n1 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.505 10:37:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzExN2M2NzExNDdkMTZmZGNhZTUzNDA3M2FlZmQxZjZcwjUN: 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: ]] 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNlZGQ3ZGI5OThlMzllYjVlOWNmNDYwMGFjNzkxMTU3ZDU2NjRkZjY5ODllMzZlMGRmYjQ1OGU1ODhkN2FiZrP6Oa8=: 00:24:17.505 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.506 10:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.883 nvme0n1 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: ]] 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.883 10:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.819 nvme0n1 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.819 10:37:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTdjMmRkMWQ2MGY2NDJjYzFiZjhjMzA5YjVjMDlkNjTAx10Y: 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: ]] 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzc1MjQ3YmU3OGEzYzA5M2RiYjQzM2E2NmFlZTQwYTIGvPBt: 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- 
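
[editor's note] Every attach in this file is preceded by the same get_main_ns_ip trace (nvmf/common.sh@741-755): it maps the transport under test to the name of the shell variable holding the address to dial, then resolves it by indirect expansion. A simplified reconstruction of the helper, assuming TEST_TRANSPORT is the selector behind the traced [[ -z tcp ]] checks:

    # Simplified reconstruction of get_main_ns_ip (nvmf/common.sh).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP  # rdma runs dial the target-namespace IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP      # tcp runs dial the initiator-side IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1  # indirect expansion; resolves to 10.0.0.1 in this run
        echo "${!ip}"
    }
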
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.819 10:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.758 nvme0n1 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU0NGZkN2EyNjg0MTA4YTY0NGExYmNkOTUxMTMyMjkwOGYyYmNkNDAwMzUwOTBmiUpz1A==: 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: ]] 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTk4YmZjOTMzYzZiN2YzNDFmN2U0YTc5N2ZlN2JjOGS05nGt: 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:20.758 10:37:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.758 10:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.694 nvme0n1 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2NiM2E3MmMwZjBkYzVmYjI3ZmQ2OTFkMTllYWU3OWFjNjJhNjY0NDM1MzMwMTRkMjc4OGZlZTAzYmUwMDgzYwT9mSI=: 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:21.694 10:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.695 10:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.695 10:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.695 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.695 10:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.695 10:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.695 10:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.695 10:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.695 10:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.695 10:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:21.695 10:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.695 10:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:21.695 10:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:21.695 10:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:21.695 10:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:21.695 10:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:21.695 10:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.070 nvme0n1 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWEwYTA5MGE3MmUyNGJmNWQ4N2YwOWU0ZjJjODBhNjQ0YjJmNDA1NTIyNjRmMDlh7euGkw==: 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: ]] 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTUwZjczZjA5NzI1ODQ5ZTJhNWVmNWE4MjBjMDQzODAwZjhjYTM3ZDAyMDg5Njk2V8LF+g==: 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.070 
10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:23.070 request:
00:24:23.070 {
00:24:23.070 "name": "nvme0",
00:24:23.070 "trtype": "tcp",
00:24:23.070 "traddr": "10.0.0.1",
00:24:23.070 "adrfam": "ipv4",
00:24:23.070 "trsvcid": "4420",
00:24:23.070 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:24:23.070 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:24:23.070 "prchk_reftag": false,
00:24:23.070 "prchk_guard": false,
00:24:23.070 "hdgst": false,
00:24:23.070 "ddgst": false,
00:24:23.070 "method": "bdev_nvme_attach_controller",
00:24:23.070 "req_id": 1
00:24:23.070 }
00:24:23.070 Got JSON-RPC error response
00:24:23.070 response:
00:24:23.070 {
00:24:23.070 "code": -5,
00:24:23.070 "message": "Input/output error"
00:24:23.070 }
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- 
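
[editor's note] This first failure case attaches with no --dhchap-key at all; the target, which now requires authentication, rejects the connect, rpc_cmd surfaces it as the JSON-RPC error -5 (Input/output error) shown above, and the NOT wrapper turns that expected failure into a pass. The two attempts that follow repeat the pattern with a wrong host key (key2) and with the right host key but a wrong controller key (key1/ckey2). A simplified version of NOT from autotest_common.sh, matching the es bookkeeping in the trace:

    # Simplified NOT helper: succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"  # death by signal is a real failure
        (( es != 0 ))                   # otherwise invert the exit status
    }

    # Expected to fail: no --dhchap-key, but the target demands auth.
    NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
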
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.070 request: 00:24:23.070 { 00:24:23.070 "name": "nvme0", 00:24:23.070 "trtype": "tcp", 00:24:23.070 "traddr": "10.0.0.1", 00:24:23.070 "adrfam": "ipv4", 00:24:23.070 "trsvcid": "4420", 00:24:23.070 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:23.070 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:23.070 "prchk_reftag": false, 00:24:23.070 "prchk_guard": false, 00:24:23.070 "hdgst": false, 00:24:23.070 "ddgst": false, 00:24:23.070 "dhchap_key": "key2", 00:24:23.070 "method": "bdev_nvme_attach_controller", 00:24:23.070 "req_id": 1 00:24:23.070 } 00:24:23.070 Got JSON-RPC error response 00:24:23.070 response: 00:24:23.070 { 00:24:23.070 "code": -5, 00:24:23.070 "message": "Input/output error" 00:24:23.070 } 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:23.070 10:37:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.070 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.071 request: 00:24:23.071 { 00:24:23.071 "name": "nvme0", 00:24:23.071 "trtype": "tcp", 00:24:23.071 "traddr": "10.0.0.1", 00:24:23.071 "adrfam": "ipv4", 
00:24:23.071 "trsvcid": "4420", 00:24:23.071 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:23.071 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:23.071 "prchk_reftag": false, 00:24:23.071 "prchk_guard": false, 00:24:23.071 "hdgst": false, 00:24:23.071 "ddgst": false, 00:24:23.071 "dhchap_key": "key1", 00:24:23.071 "dhchap_ctrlr_key": "ckey2", 00:24:23.071 "method": "bdev_nvme_attach_controller", 00:24:23.071 "req_id": 1 00:24:23.071 } 00:24:23.071 Got JSON-RPC error response 00:24:23.071 response: 00:24:23.071 { 00:24:23.071 "code": -5, 00:24:23.071 "message": "Input/output error" 00:24:23.071 } 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:23.071 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:23.071 rmmod nvme_tcp 00:24:23.071 rmmod nvme_fabrics 00:24:23.330 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:23.330 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:24:23.330 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:24:23.330 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2398122 ']' 00:24:23.330 10:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2398122 00:24:23.330 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2398122 ']' 00:24:23.330 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2398122 00:24:23.330 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:24:23.330 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:23.330 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2398122 00:24:23.330 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:23.330 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:23.330 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2398122' 00:24:23.330 killing process with pid 2398122 00:24:23.330 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2398122 00:24:23.330 10:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2398122 00:24:23.589 10:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:24:23.589 10:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:23.589 10:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:23.589 10:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:23.589 10:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:23.589 10:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.589 10:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:23.589 10:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.493 10:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:25.493 10:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:25.493 10:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:25.493 10:37:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:25.493 10:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:25.494 10:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:25.494 10:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:25.494 10:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:25.494 10:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:25.494 10:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:25.494 10:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:25.494 10:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:25.494 10:37:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:26.869 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:26.869 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:26.869 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:26.869 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:26.869 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:26.869 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:26.869 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:26.869 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:26.869 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:26.869 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:26.869 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:26.869 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:26.869 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:26.869 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:26.869 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:26.869 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:27.863 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:27.863 10:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.zeP /tmp/spdk.key-null.bA0 /tmp/spdk.key-sha256.0ls /tmp/spdk.key-sha384.eUR /tmp/spdk.key-sha512.dSl 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log
00:24:27.863 10:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:24:28.797 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:24:28.797 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:24:28.797 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:24:28.797 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:24:28.797 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:24:28.797 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:24:28.797 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:24:28.797 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:24:28.797 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:24:29.054 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:24:29.054 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:24:29.054 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:24:29.054 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:24:29.054 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:24:29.054 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:24:29.054 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:24:29.054 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:24:29.054
00:24:29.054 real 0m49.853s
00:24:29.054 user 0m47.843s
00:24:29.054 sys 0m5.684s
00:24:29.054 10:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:29.054 10:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:29.054 ************************************
00:24:29.054 END TEST nvmf_auth_host
00:24:29.054 ************************************
00:24:29.054 10:37:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:24:29.054 10:37:23 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]]
00:24:29.054 10:37:23 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:24:29.054 10:37:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:24:29.054 10:37:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:29.054 10:37:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:29.054 ************************************
00:24:29.054 START TEST nvmf_digest
00:24:29.054 ************************************
00:24:29.054 10:37:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:24:29.311 * Looking for test storage...
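
[editor's note] The cleanup traced above tears the kernel target down strictly child-before-parent, since configfs refuses to remove directories that still have members, and only then unloads the modules; the generated key files are removed last. In outline (the namespace enable path behind the bare echo 0 at nvmf/common.sh@686 is an assumption):

    # Kernel nvmet teardown order, as in cleanup/clean_kernel_target above.
    nqn=nqn.2024-02.io.spdk:cnode0
    host=nqn.2024-02.io.spdk:host0
    cfs=/sys/kernel/config/nvmet

    rm "$cfs/subsystems/$nqn/allowed_hosts/$host"        # drop the host ACL symlink
    rmdir "$cfs/hosts/$host"                             # remove the host entry
    echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"  # disable the namespace (path assumed)
    rm -f "$cfs/ports/1/subsystems/$nqn"                 # unlink subsystem from the port
    rmdir "$cfs/subsystems/$nqn/namespaces/1"
    rmdir "$cfs/ports/1"
    rmdir "$cfs/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet                          # safe once configfs is empty
    rm -f /tmp/spdk.key-*                                # scrub the generated secrets
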
00:24:29.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:29.311 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.312 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:29.312 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:29.312 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:29.312 10:37:23 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.312 10:37:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.312 10:37:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.312 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:29.312 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:29.312 10:37:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:24:29.312 10:37:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:31.208 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.208 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:24:31.208 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:31.209 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:31.209 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:31.209 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:31.209 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:31.209 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.468 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.468 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.468 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:31.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:24:31.468 00:24:31.468 --- 10.0.0.2 ping statistics --- 00:24:31.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.469 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:31.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:24:31.469 00:24:31.469 --- 10.0.0.1 ping statistics --- 00:24:31.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.469 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:31.469 ************************************ 00:24:31.469 START TEST nvmf_digest_clean 00:24:31.469 ************************************ 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2407572 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2407572 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2407572 ']' 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.469 
10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:31.469 10:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:31.469 [2024-07-15 10:37:25.998286] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:31.469 [2024-07-15 10:37:25.998358] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.469 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.469 [2024-07-15 10:37:26.066239] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.727 [2024-07-15 10:37:26.185363] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.727 [2024-07-15 10:37:26.185425] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.727 [2024-07-15 10:37:26.185449] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.727 [2024-07-15 10:37:26.185463] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.727 [2024-07-15 10:37:26.185475] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:31.727 [2024-07-15 10:37:26.185506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.727 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:31.727 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:31.727 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:31.727 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:31.727 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:31.727 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.727 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:31.727 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:31.727 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:31.727 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.727 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:31.727 null0 00:24:31.727 [2024-07-15 10:37:26.362455] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.985 [2024-07-15 10:37:26.386704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.985 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.985 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:31.985 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:31.985 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:31.985 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:31.985 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:31.985 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:31.985 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:31.985 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2407604 00:24:31.985 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2407604 /var/tmp/bperf.sock 00:24:31.985 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:31.985 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2407604 ']' 00:24:31.985 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:31.985 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:31.985 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:24:31.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:31.985 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:31.985 10:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:31.985 [2024-07-15 10:37:26.437811] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:31.985 [2024-07-15 10:37:26.437912] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2407604 ] 00:24:31.985 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.985 [2024-07-15 10:37:26.505310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.985 [2024-07-15 10:37:26.626354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.919 10:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:32.919 10:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:32.919 10:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:32.919 10:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:32.919 10:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:33.177 10:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:33.177 10:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:33.435 nvme0n1 00:24:33.435 10:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:33.435 10:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:33.695 Running I/O for 2 seconds... 
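The attach-and-run sequence just traced is the core of every digest measurement in this phase and repeats for each workload below. Condensed, with the Jenkins workspace prefix shortened to $SPDK for readability (every flag and argument appears verbatim in the trace):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF=/var/tmp/bperf.sock

    # bdevperf starts idle (--wait-for-rpc) so digest settings can be
    # applied over its RPC socket before any I/O is issued.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    "$SPDK/scripts/rpc.py" -s "$BPERF" framework_start_init
    # --ddgst enables the NVMe/TCP data digest (CRC32C) on the initiator.
    "$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF" perform_tests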
00:24:35.604 00:24:35.604 Latency(us) 00:24:35.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.604 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:35.604 nvme0n1 : 2.00 18912.78 73.88 0.00 0.00 6760.42 3349.62 15534.46 00:24:35.604 =================================================================================================================== 00:24:35.604 Total : 18912.78 73.88 0.00 0.00 6760.42 3349.62 15534.46 00:24:35.604 0 00:24:35.604 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:35.604 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:35.604 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:35.604 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:35.604 | select(.opcode=="crc32c") 00:24:35.604 | "\(.module_name) \(.executed)"' 00:24:35.604 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:35.863 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:35.863 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:35.863 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:35.863 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:35.863 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2407604 00:24:35.863 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2407604 ']' 00:24:35.863 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2407604 00:24:35.863 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:35.863 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:35.863 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2407604 00:24:35.863 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:35.863 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:35.863 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2407604' 00:24:35.863 killing process with pid 2407604 00:24:35.863 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2407604 00:24:35.863 Received shutdown signal, test time was about 2.000000 seconds 00:24:35.863 00:24:35.863 Latency(us) 00:24:35.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.863 =================================================================================================================== 00:24:35.863 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:35.863 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2407604 00:24:36.121 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:36.121 10:37:30 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:36.121 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:36.121 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:36.121 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:36.121 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:36.121 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:36.121 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2408136 00:24:36.121 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2408136 /var/tmp/bperf.sock 00:24:36.121 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:36.121 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2408136 ']' 00:24:36.121 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:36.121 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:36.121 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:36.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:36.121 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:36.121 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:36.121 [2024-07-15 10:37:30.721324] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:36.121 [2024-07-15 10:37:30.721416] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408136 ] 00:24:36.121 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:36.121 Zero copy mechanism will not be used. 
00:24:36.121 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.379 [2024-07-15 10:37:30.779994] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.379 [2024-07-15 10:37:30.887271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.379 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.379 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:36.379 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:36.379 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:36.379 10:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:36.638 10:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:36.638 10:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:37.207 nvme0n1 00:24:37.207 10:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:37.207 10:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:37.466 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:37.466 Zero copy mechanism will not be used. 00:24:37.466 Running I/O for 2 seconds... 
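Each 2-second run ends with the same verification step, visible in the trace before and after this point: read the accel statistics back over the bperf socket and confirm the CRC32C digests were really computed, and by the expected module. A sketch mirroring the harness's own read/test pattern:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Expected module is "software" throughout this log, since DSA offload
    # is disabled (scan_dsa=false).
    read -r acc_module acc_executed < <(
      "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))           # digest work actually happened
    [[ $acc_module == software ]]    # ...and in the expected module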
00:24:39.369 00:24:39.369 Latency(us) 00:24:39.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.369 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:39.369 nvme0n1 : 2.01 3268.38 408.55 0.00 0.00 4891.20 1201.49 8204.14 00:24:39.369 =================================================================================================================== 00:24:39.369 Total : 3268.38 408.55 0.00 0.00 4891.20 1201.49 8204.14 00:24:39.369 0 00:24:39.369 10:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:39.369 10:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:39.369 10:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:39.369 10:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:39.369 10:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:39.369 | select(.opcode=="crc32c") 00:24:39.369 | "\(.module_name) \(.executed)"' 00:24:39.627 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:39.627 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:39.627 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:39.627 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:39.627 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2408136 00:24:39.627 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2408136 ']' 00:24:39.627 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2408136 00:24:39.627 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:39.627 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:39.627 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2408136 00:24:39.627 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:39.627 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:39.627 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2408136' 00:24:39.627 killing process with pid 2408136 00:24:39.627 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2408136 00:24:39.627 Received shutdown signal, test time was about 2.000000 seconds 00:24:39.627 00:24:39.627 Latency(us) 00:24:39.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.627 =================================================================================================================== 00:24:39.627 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:39.627 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2408136 00:24:39.885 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:39.885 10:37:34 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:39.885 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:39.885 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:39.885 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:39.885 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:39.885 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:39.885 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2408546 00:24:39.885 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2408546 /var/tmp/bperf.sock 00:24:39.885 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:39.885 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2408546 ']' 00:24:39.885 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:39.885 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:39.885 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:39.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:39.886 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:39.886 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:39.886 [2024-07-15 10:37:34.475642] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:39.886 [2024-07-15 10:37:34.475732] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408546 ] 00:24:39.886 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.144 [2024-07-15 10:37:34.538626] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.144 [2024-07-15 10:37:34.660109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.144 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:40.144 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:40.144 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:40.144 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:40.144 10:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:40.713 10:37:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:40.713 10:37:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:40.972 nvme0n1 00:24:40.972 10:37:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:40.972 10:37:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:41.230 Running I/O for 2 seconds... 
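For reference, the clean-digest phase cycles that attach/run/verify loop over four workloads; the run_bperf arguments (rw, block size, queue depth, scan_dsa) come straight from the host/digest.sh lines traced in this log:

    run_bperf randread  4096   128 false   # 4 KiB random reads,  qd=128
    run_bperf randread  131072 16  false   # 128 KiB reads,  qd=16 (above the
                                           # 65536-byte zero-copy threshold)
    run_bperf randwrite 4096   128 false   # 4 KiB random writes, qd=128
    run_bperf randwrite 131072 16  false   # 128 KiB writes, qd=16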
00:24:43.139 00:24:43.139 Latency(us) 00:24:43.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.140 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:43.140 nvme0n1 : 2.00 19430.58 75.90 0.00 0.00 6576.76 3470.98 17476.27 00:24:43.140 =================================================================================================================== 00:24:43.140 Total : 19430.58 75.90 0.00 0.00 6576.76 3470.98 17476.27 00:24:43.140 0 00:24:43.140 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:43.140 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:43.140 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:43.140 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:43.140 | select(.opcode=="crc32c") 00:24:43.140 | "\(.module_name) \(.executed)"' 00:24:43.140 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:43.464 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:43.464 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:43.464 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:43.464 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:43.464 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2408546 00:24:43.464 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2408546 ']' 00:24:43.464 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2408546 00:24:43.464 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:43.464 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:43.464 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2408546 00:24:43.464 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:43.464 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:43.464 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2408546' 00:24:43.464 killing process with pid 2408546 00:24:43.464 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2408546 00:24:43.464 Received shutdown signal, test time was about 2.000000 seconds 00:24:43.464 00:24:43.464 Latency(us) 00:24:43.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.464 =================================================================================================================== 00:24:43.464 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:43.464 10:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2408546 00:24:43.722 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:43.722 10:37:38 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:43.722 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:43.722 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:43.722 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:43.722 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:43.722 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:43.722 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2409078 00:24:43.722 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2409078 /var/tmp/bperf.sock 00:24:43.722 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:43.722 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2409078 ']' 00:24:43.722 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:43.722 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.722 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:43.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:43.722 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.722 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:43.722 [2024-07-15 10:37:38.265141] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:43.722 [2024-07-15 10:37:38.265245] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409078 ] 00:24:43.722 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:43.722 Zero copy mechanism will not be used. 
00:24:43.722 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.722 [2024-07-15 10:37:38.323299] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.980 [2024-07-15 10:37:38.431594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.980 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:43.980 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:43.980 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:43.980 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:43.980 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:44.238 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:44.238 10:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:44.806 nvme0n1 00:24:44.806 10:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:44.806 10:37:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:44.806 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:44.806 Zero copy mechanism will not be used. 00:24:44.806 Running I/O for 2 seconds... 
00:24:47.341 00:24:47.341 Latency(us) 00:24:47.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.341 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:47.341 nvme0n1 : 2.01 2771.26 346.41 0.00 0.00 5760.09 2560.76 8009.96 00:24:47.341 =================================================================================================================== 00:24:47.341 Total : 2771.26 346.41 0.00 0.00 5760.09 2560.76 8009.96 00:24:47.341 0 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:47.341 | select(.opcode=="crc32c") 00:24:47.341 | "\(.module_name) \(.executed)"' 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2409078 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2409078 ']' 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2409078 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2409078 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2409078' 00:24:47.341 killing process with pid 2409078 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2409078 00:24:47.341 Received shutdown signal, test time was about 2.000000 seconds 00:24:47.341 00:24:47.341 Latency(us) 00:24:47.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.341 =================================================================================================================== 00:24:47.341 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2409078 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2407572 00:24:47.341 10:37:41 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2407572 ']' 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2407572 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:47.341 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2407572 00:24:47.600 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:47.600 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:47.600 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2407572' 00:24:47.600 killing process with pid 2407572 00:24:47.600 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2407572 00:24:47.600 10:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2407572 00:24:47.859 00:24:47.859 real 0m16.309s 00:24:47.859 user 0m32.690s 00:24:47.859 sys 0m4.048s 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:47.859 ************************************ 00:24:47.859 END TEST nvmf_digest_clean 00:24:47.859 ************************************ 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:47.859 ************************************ 00:24:47.859 START TEST nvmf_digest_error 00:24:47.859 ************************************ 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2409515 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2409515 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2409515 ']' 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:47.859 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:47.859 [2024-07-15 10:37:42.356932] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:47.859 [2024-07-15 10:37:42.357013] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.859 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.859 [2024-07-15 10:37:42.419723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.118 [2024-07-15 10:37:42.525234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.118 [2024-07-15 10:37:42.525293] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.118 [2024-07-15 10:37:42.525306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.118 [2024-07-15 10:37:42.525316] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.118 [2024-07-15 10:37:42.525325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:48.118 [2024-07-15 10:37:42.525351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:48.118 [2024-07-15 10:37:42.593935] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:48.118 null0 00:24:48.118 [2024-07-15 10:37:42.710937] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.118 [2024-07-15 10:37:42.735135] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2409652 00:24:48.118 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2409652 /var/tmp/bperf.sock 00:24:48.119 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:48.119 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2409652 ']' 00:24:48.119 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:48.119 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:48.119 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:48.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:48.119 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:48.119 10:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:48.379 [2024-07-15 10:37:42.782852] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:48.379 [2024-07-15 10:37:42.782968] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409652 ] 00:24:48.379 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.379 [2024-07-15 10:37:42.844067] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.379 [2024-07-15 10:37:42.960113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.638 10:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:48.638 10:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:48.638 10:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:48.638 10:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:48.896 10:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:48.896 10:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.896 10:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:48.896 10:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.896 10:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:48.896 10:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:49.154 nvme0n1 00:24:49.154 10:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:49.154 10:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.154 10:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:49.154 10:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.154 10:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:49.154 10:37:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
00:24:49.412 Running I/O for 2 seconds...
00:24:49.412 [2024-07-15 10:37:43.893292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50)
00:24:49.412 [2024-07-15 10:37:43.893343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:49.412 [2024-07-15 10:37:43.893365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:49.412 [2024-07-15 10:37:43.907677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50)
00:24:49.412 [2024-07-15 10:37:43.907721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:49.412 [2024-07-15 10:37:43.907741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:49.412 [2024-07-15 10:37:43.921997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50)
00:24:49.412 [2024-07-15 10:37:43.922028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:49.412 [2024-07-15 10:37:43.922046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... ~120 further record triplets of the same form (data digest error on tqpair=(0xcb9d50), the failed READ command, its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion), timestamps 10:37:43.933 through 10:37:45.583, differing only in cid and lba ...]
dnr:0 00:24:50.978 [2024-07-15 10:37:45.597221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:50.978 [2024-07-15 10:37:45.597250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.978 [2024-07-15 10:37:45.597266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.978 [2024-07-15 10:37:45.610383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:50.978 [2024-07-15 10:37:45.610412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.978 [2024-07-15 10:37:45.610428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.978 [2024-07-15 10:37:45.622593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:50.978 [2024-07-15 10:37:45.622639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.978 [2024-07-15 10:37:45.622656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.239 [2024-07-15 10:37:45.634390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.239 [2024-07-15 10:37:45.634419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.239 [2024-07-15 10:37:45.634435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.239 [2024-07-15 10:37:45.648615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.239 [2024-07-15 10:37:45.648643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.239 [2024-07-15 10:37:45.648658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.239 [2024-07-15 10:37:45.661478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.239 [2024-07-15 10:37:45.661508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.239 [2024-07-15 10:37:45.661524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.239 [2024-07-15 10:37:45.677444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.239 [2024-07-15 10:37:45.677474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.239 [2024-07-15 10:37:45.677498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.239 [2024-07-15 10:37:45.692439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.239 [2024-07-15 10:37:45.692471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.239 [2024-07-15 10:37:45.692488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.239 [2024-07-15 10:37:45.704245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.239 [2024-07-15 10:37:45.704274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.239 [2024-07-15 10:37:45.704290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.239 [2024-07-15 10:37:45.716295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.239 [2024-07-15 10:37:45.716340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.239 [2024-07-15 10:37:45.716357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.239 [2024-07-15 10:37:45.728930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.239 [2024-07-15 10:37:45.728972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.239 [2024-07-15 10:37:45.728988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.239 [2024-07-15 10:37:45.740097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.239 [2024-07-15 10:37:45.740125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.239 [2024-07-15 10:37:45.740141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.239 [2024-07-15 10:37:45.755429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.239 [2024-07-15 10:37:45.755460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.239 [2024-07-15 10:37:45.755477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.239 [2024-07-15 10:37:45.769743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.239 [2024-07-15 10:37:45.769773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.239 [2024-07-15 10:37:45.769791] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.239 [2024-07-15 10:37:45.782237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.239 [2024-07-15 10:37:45.782282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.239 [2024-07-15 10:37:45.782300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.239 [2024-07-15 10:37:45.794419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.239 [2024-07-15 10:37:45.794455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.239 [2024-07-15 10:37:45.794473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.240 [2024-07-15 10:37:45.807681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.240 [2024-07-15 10:37:45.807727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.240 [2024-07-15 10:37:45.807744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.240 [2024-07-15 10:37:45.818764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.240 [2024-07-15 10:37:45.818791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.240 [2024-07-15 10:37:45.818807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.240 [2024-07-15 10:37:45.832441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.240 [2024-07-15 10:37:45.832472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.240 [2024-07-15 10:37:45.832489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.240 [2024-07-15 10:37:45.845390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.240 [2024-07-15 10:37:45.845421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.240 [2024-07-15 10:37:45.845438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:51.240 [2024-07-15 10:37:45.858155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50) 00:24:51.240 [2024-07-15 10:37:45.858185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.240 [2024-07-15 10:37:45.858203] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:51.240 [2024-07-15 10:37:45.871614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcb9d50)
00:24:51.240 [2024-07-15 10:37:45.871645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:51.240 [2024-07-15 10:37:45.871662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:51.240
00:24:51.240 Latency(us)
00:24:51.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:51.240 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:24:51.240 nvme0n1 : 2.00 18672.21 72.94 0.00 0.00 6844.84 3543.80 18835.53
00:24:51.240 ===================================================================================================================
00:24:51.240 Total : 18672.21 72.94 0.00 0.00 6844.84 3543.80 18835.53
00:24:51.240 0
00:24:51.498 10:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:51.498 10:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:51.498 10:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:51.498 | .driver_specific
00:24:51.498 | .nvme_error
00:24:51.498 | .status_code
00:24:51.498 | .command_transient_transport_error'
00:24:51.498 10:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:51.758 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 ))
00:24:51.758 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2409652
00:24:51.758 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2409652 ']'
00:24:51.758 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2409652
00:24:51.758 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:51.758 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:51.758 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2409652
00:24:51.758 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:51.758 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:51.758 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2409652'
killing process with pid 2409652
10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2409652
Received shutdown signal, test time was about 2.000000 seconds
00:24:51.759
00:24:51.759 Latency(us)
00:24:51.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:51.759 ===================================================================================================================
00:24:51.759 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:51.759 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2409652
00:24:52.017 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:24:52.017 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:52.017 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:24:52.017 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:52.017 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:52.017 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2410066
00:24:52.017 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:24:52.017 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2410066 /var/tmp/bperf.sock
00:24:52.017 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2410066 ']'
00:24:52.017 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:52.017 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:52.017 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:52.017 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:52.017 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:52.017 [2024-07-15 10:37:46.511888] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:24:52.017 [2024-07-15 10:37:46.511983] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2410066 ]
00:24:52.017 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:52.017 Zero copy mechanism will not be used.
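The host/digest.sh trace above is the heart of the nvmf_digest_error check: the per-bdev NVMe error counters are read back over the bdevperf RPC socket and the transient-transport-error count must be non-zero (146 in this run) for the test to pass. A minimal standalone sketch of that same check, assuming an SPDK checkout at $SPDK_DIR and a bdevperf instance already serving RPCs on /var/tmp/bperf.sock (both assumptions here; the RPC call and the jq filter are copied verbatim from the trace):

#!/usr/bin/env bash
# Sketch of get_transient_errcount as traced in host/digest.sh above.
# Assumes $SPDK_DIR points at an SPDK checkout and a bdevperf instance is
# already listening on /var/tmp/bperf.sock (neither is set by this log).
get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat reports per-bdev NVMe error counters because the
    # controller is created with --nvme-error-stat (see the setup below).
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# The check passes only if at least one injected digest error surfaced as a
# COMMAND TRANSIENT TRANSPORT ERROR completion; this run counted 146.
(( errcount > 0 ))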
00:24:52.017 EAL: No free 2048 kB hugepages reported on node 1
00:24:52.017 [2024-07-15 10:37:46.573281] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:52.274 [2024-07-15 10:37:46.687043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:52.274 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:52.274 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:52.274 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:52.274 10:37:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:52.532 10:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:52.532 10:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.532 10:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:52.532 10:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:52.532 10:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:52.532 10:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:53.095 nvme0n1
00:24:53.095 10:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:53.095 10:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.095 10:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:53.095 10:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.095 10:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:53.095 10:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:53.095 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:53.095 Zero copy mechanism will not be used.
00:24:53.095 Running I/O for 2 seconds...
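The setup traced above reduces to three RPCs plus the job kick-off: NVMe error statistics are enabled with unlimited retries, the controller is attached with --ddgst so the host verifies a CRC32C data digest on every received data PDU, and the accel framework is told to corrupt every 32nd crc32c operation. A condensed sketch of that sequence, with all flags taken verbatim from the trace ($SPDK_DIR is an assumption, and the trace issues the injection through rpc_cmd, whose target socket is not shown in this log; it is pointed at the bperf socket below as an assumption):

#!/usr/bin/env bash
# Condensed setup sketch from the host/digest.sh trace above. Flags are
# verbatim from the log; $SPDK_DIR and the injection socket are assumptions.
BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"

# Keep per-bdev NVMe error statistics and retry failed I/O indefinitely, so
# every injected digest error is counted rather than failing the workload.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# --ddgst enables the NVMe/TCP data digest: the host computes a CRC32C over
# each data PDU it receives and compares it against the digest in the PDU.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 32nd crc32c operation in the accel framework; each corrupted
# digest should complete as a COMMAND TRANSIENT TRANSPORT ERROR, as in the
# per-I/O log lines that follow.
$BPERF_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Start the queued bdevperf job (randread, 128 KiB I/O, queue depth 16).
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests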
00:24:53.095 [2024-07-15 10:37:47.676980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.096 [2024-07-15 10:37:47.677029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.096 [2024-07-15 10:37:47.677058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:53.096 [2024-07-15 10:37:47.686743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.096 [2024-07-15 10:37:47.686778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.096 [2024-07-15 10:37:47.686800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:53.096 [2024-07-15 10:37:47.696247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.096 [2024-07-15 10:37:47.696280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.096 [2024-07-15 10:37:47.696302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:53.096 [2024-07-15 10:37:47.705687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.096 [2024-07-15 10:37:47.705720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.096 [2024-07-15 10:37:47.705746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.096 [2024-07-15 10:37:47.714970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.096 [2024-07-15 10:37:47.714997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.096 [2024-07-15 10:37:47.715018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:53.096 [2024-07-15 10:37:47.724364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.096 [2024-07-15 10:37:47.724396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.096 [2024-07-15 10:37:47.724421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:53.096 [2024-07-15 10:37:47.733587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.096 [2024-07-15 10:37:47.733620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.096 [2024-07-15 10:37:47.733646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:53.096 [2024-07-15 10:37:47.742890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.096 [2024-07-15 10:37:47.742936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.096 [2024-07-15 10:37:47.742962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.354 [2024-07-15 10:37:47.752499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.354 [2024-07-15 10:37:47.752531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.354 [2024-07-15 10:37:47.752555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:53.354 [2024-07-15 10:37:47.761802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.354 [2024-07-15 10:37:47.761835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.354 [2024-07-15 10:37:47.761864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:53.354 [2024-07-15 10:37:47.771119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.354 [2024-07-15 10:37:47.771148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.354 [2024-07-15 10:37:47.771188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:53.354 [2024-07-15 10:37:47.780480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.354 [2024-07-15 10:37:47.780512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.354 [2024-07-15 10:37:47.780541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.354 [2024-07-15 10:37:47.789801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.354 [2024-07-15 10:37:47.789834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.354 [2024-07-15 10:37:47.789859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:53.354 [2024-07-15 10:37:47.799101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.354 [2024-07-15 10:37:47.799128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.354 [2024-07-15 10:37:47.799146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:53.354 [2024-07-15 10:37:47.808459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.354 [2024-07-15 10:37:47.808491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.354 [2024-07-15 10:37:47.808513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:53.354 [2024-07-15 10:37:47.818043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.354 [2024-07-15 10:37:47.818072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.354 [2024-07-15 10:37:47.818090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.354 [2024-07-15 10:37:47.827455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.354 [2024-07-15 10:37:47.827487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.354 [2024-07-15 10:37:47.827506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:53.354 [2024-07-15 10:37:47.836940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.354 [2024-07-15 10:37:47.836969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.354 [2024-07-15 10:37:47.837000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:53.354 [2024-07-15 10:37:47.846253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.354 [2024-07-15 10:37:47.846286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.354 [2024-07-15 10:37:47.846309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:53.354 [2024-07-15 10:37:47.855497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.354 [2024-07-15 10:37:47.855529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.354 [2024-07-15 10:37:47.855548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.354 [2024-07-15 10:37:47.864891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.354 [2024-07-15 10:37:47.864938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:53.354 [2024-07-15 10:37:47.864955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:53.354 [2024-07-15 10:37:47.874299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.354 [2024-07-15 10:37:47.874331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.354 [2024-07-15 10:37:47.874356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:53.354 [2024-07-15 10:37:47.883874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.354 [2024-07-15 10:37:47.883928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.355 [2024-07-15 10:37:47.883944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:53.355 [2024-07-15 10:37:47.893367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.355 [2024-07-15 10:37:47.893399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.355 [2024-07-15 10:37:47.893419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.355 [2024-07-15 10:37:47.903313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.355 [2024-07-15 10:37:47.903346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.355 [2024-07-15 10:37:47.903369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:53.355 [2024-07-15 10:37:47.912858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.355 [2024-07-15 10:37:47.912933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.355 [2024-07-15 10:37:47.912952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:53.355 [2024-07-15 10:37:47.922198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.355 [2024-07-15 10:37:47.922230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.355 [2024-07-15 10:37:47.922254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:53.355 [2024-07-15 10:37:47.931483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.355 [2024-07-15 10:37:47.931516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.355 [2024-07-15 10:37:47.931534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.355 [2024-07-15 10:37:47.940980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.355 [2024-07-15 10:37:47.941010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.355 [2024-07-15 10:37:47.941035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:53.355 [2024-07-15 10:37:47.950290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.355 [2024-07-15 10:37:47.950322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.355 [2024-07-15 10:37:47.950341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:53.355 [2024-07-15 10:37:47.959554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.355 [2024-07-15 10:37:47.959587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.355 [2024-07-15 10:37:47.959605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:53.355 [2024-07-15 10:37:47.968947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.355 [2024-07-15 10:37:47.968974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.355 [2024-07-15 10:37:47.968993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.355 [2024-07-15 10:37:47.978189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.355 [2024-07-15 10:37:47.978215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.355 [2024-07-15 10:37:47.978249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:53.355 [2024-07-15 10:37:47.987518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.355 [2024-07-15 10:37:47.987551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.355 [2024-07-15 10:37:47.987571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:53.355 [2024-07-15 10:37:47.996884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.355 [2024-07-15 10:37:47.996931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.355 [2024-07-15 10:37:47.996948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:53.614 [2024-07-15 10:37:48.006835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.614 [2024-07-15 10:37:48.006868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.614 [2024-07-15 10:37:48.006900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.614 [2024-07-15 10:37:48.016296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.016329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.016351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.025608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.025647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.025666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.034975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.035002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.035021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.044407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.044440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.044458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.053823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.053857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.053886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.063150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 
00:24:53.615 [2024-07-15 10:37:48.063176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.063220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.072434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.072466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.072485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.081738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.081771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.081789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.091011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.091039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.091059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.100268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.100301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.100320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.109631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.109663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.109681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.118940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.118968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.118984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.128234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.128268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.128286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.137495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.137528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.137545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.146937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.146965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.146980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.156179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.156223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.156241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.165470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.165503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.165521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.174868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.174923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.174939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.184206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.184233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.184272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.193493] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.193528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.193548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.202826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.202859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.202885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.212114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.212143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.212160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.221414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.221447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.221465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.230830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.230864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.230891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.240202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.240236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.240254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:53.615 [2024-07-15 10:37:48.249442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:53.615 [2024-07-15 10:37:48.249474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.615 [2024-07-15 10:37:48.249493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0
00:24:53.615 [2024-07-15 10:37:48.258869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0)
00:24:53.615 [2024-07-15 10:37:48.258924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:53.615 [2024-07-15 10:37:48.258942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[00:24:53.876-00:24:55.209 (2024-07-15 10:37:48.268197 through 10:37:49.571266): the same three-line pattern repeats continuously on tqpair=(0xc584f0): a data digest error from nvme_tcp_accel_seq_recv_compute_crc32_done(), the affected READ (sqid:1 cid:15 nsid:1 len:32, varying lba), and its completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22) p:0 m:0 dnr:0 with sqhd cycling 0001/0021/0041/0061; duplicate entries elided]
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.922 [2024-07-15 10:37:49.526327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:54.922 [2024-07-15 10:37:49.526354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.922 [2024-07-15 10:37:49.526385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:54.922 [2024-07-15 10:37:49.535021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:54.922 [2024-07-15 10:37:49.535065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.922 [2024-07-15 10:37:49.535081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:54.922 [2024-07-15 10:37:49.543654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:54.922 [2024-07-15 10:37:49.543695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.922 [2024-07-15 10:37:49.543711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.922 [2024-07-15 10:37:49.552781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:54.922 [2024-07-15 10:37:49.552812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.922 [2024-07-15 10:37:49.552830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:54.922 [2024-07-15 10:37:49.561933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:54.922 [2024-07-15 10:37:49.561964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.922 [2024-07-15 10:37:49.561981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.209 [2024-07-15 10:37:49.571215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:55.209 [2024-07-15 10:37:49.571247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.209 [2024-07-15 10:37:49.571266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.209 [2024-07-15 10:37:49.580608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:55.209 [2024-07-15 10:37:49.580639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:55.209 [2024-07-15 10:37:49.580657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.209 [2024-07-15 10:37:49.589924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:55.209 [2024-07-15 10:37:49.589952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.209 [2024-07-15 10:37:49.589967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.209 [2024-07-15 10:37:49.599235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:55.209 [2024-07-15 10:37:49.599267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.209 [2024-07-15 10:37:49.599284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.209 [2024-07-15 10:37:49.608661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:55.209 [2024-07-15 10:37:49.608692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.209 [2024-07-15 10:37:49.608710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.209 [2024-07-15 10:37:49.618083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:55.209 [2024-07-15 10:37:49.618110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.209 [2024-07-15 10:37:49.618140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.209 [2024-07-15 10:37:49.627490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:55.209 [2024-07-15 10:37:49.627522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.209 [2024-07-15 10:37:49.627539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.209 [2024-07-15 10:37:49.636844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:55.209 [2024-07-15 10:37:49.636884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.209 [2024-07-15 10:37:49.636918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:55.209 [2024-07-15 10:37:49.646231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:55.209 [2024-07-15 10:37:49.646263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.209 [2024-07-15 10:37:49.646280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:55.209 [2024-07-15 10:37:49.655644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:55.209 [2024-07-15 10:37:49.655676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.209 [2024-07-15 10:37:49.655693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.209 [2024-07-15 10:37:49.664945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc584f0) 00:24:55.209 [2024-07-15 10:37:49.664973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.209 [2024-07-15 10:37:49.665003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.209 00:24:55.209 Latency(us) 00:24:55.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.209 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:55.209 nvme0n1 : 2.00 3243.50 405.44 0.00 0.00 4928.54 4126.34 12087.75 00:24:55.209 =================================================================================================================== 00:24:55.209 Total : 3243.50 405.44 0.00 0.00 4928.54 4126.34 12087.75 00:24:55.209 0 00:24:55.209 10:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:55.209 10:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:55.209 10:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:55.209 10:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:55.209 | .driver_specific 00:24:55.209 | .nvme_error 00:24:55.209 | .status_code 00:24:55.209 | .command_transient_transport_error' 00:24:55.469 10:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 209 > 0 )) 00:24:55.469 10:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2410066 00:24:55.469 10:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2410066 ']' 00:24:55.469 10:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2410066 00:24:55.469 10:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:55.469 10:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:55.469 10:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2410066 00:24:55.469 10:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:55.469 10:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:55.469 10:37:49 
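The (00/22) in each completion above is status code type 0x0 / status code 0x22, i.e. Command Transient Transport Error, which is exactly the counter digest.sh polls here. A minimal standalone sketch of the query performed by the @18/@28 trace lines, assuming the bperf socket is still up and nvme0n1 is still attached:

    # Fetch bdevperf's per-bdev iostat over its private RPC socket and pull
    # out the transient transport error count kept under the nvme_error stats.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error
                 | .status_code | .command_transient_transport_error'

In this run the query returned 209, so the (( 209 > 0 )) assertion above passed and the randread case is counted as good before the bperf process is torn down.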
10:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2410066'
killing process with pid 2410066
10:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2410066
00:24:55.469 Received shutdown signal, test time was about 2.000000 seconds
00:24:55.469
00:24:55.469 Latency(us)
00:24:55.469 Device Information                                                            : runtime(s)    IOPS   MiB/s  Fail/s  TO/s  Average      min      max
00:24:55.469 ===================================================================================================================
00:24:55.469 Total                                                                         :       0.00    0.00    0.00    0.00  0.00     0.00     0.00
10:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2410066
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2410470
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2410470 /var/tmp/bperf.sock
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2410470 ']'
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
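run_bperf_err relaunches bdevperf suspended behind its RPC socket (-z) so that error injection can be armed before any I/O is issued, and waitforlisten blocks until that socket answers. A rough equivalent of the launch-and-wait step, where the polling loop is an illustrative stand-in for autotest_common.sh's waitforlisten rather than its actual implementation:

    # Start bdevperf idle (-z) on a private RPC socket, then poll the socket
    # until it accepts RPCs before configuring and kicking off the run.
    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    until "$rpc" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done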
[2024-07-15 10:37:50.275761] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:24:55.728 [2024-07-15 10:37:50.275839] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2410470 ]
00:24:55.728 EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 10:37:50.341144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 10:37:50.455220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
10:37:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
nvme0n1
10:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
10:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
10:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
10:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
10:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
10:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
Running I/O for 2 seconds...
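That setup sequence is what makes every data digest fail while no I/O ever errors out to the caller: unlimited bdev retries plus per-status error statistics, a clean crc32c path while the --ddgst controller attaches, then corrupt-mode crc32c injection (-i 256, as traced) once the connection is up. Collapsed into one illustrative script; the split between bperf_rpc (bperf.sock) and rpc_cmd (presumably the app's default socket, not bperf.sock) follows the trace above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # bperf side: count NVMe errors by status code, retry failed I/O forever
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # rpc_cmd side: keep crc32c clean while connecting...
    "$rpc" accel_error_inject_error -o crc32c -t disable
    # bperf side: attach the controller with data digest (DDGST) enabled
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # rpc_cmd side: ...then switch crc32c injection to corrupt (-i 256 as traced),
    # so computed data digests periodically stop matching and trip the check
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256

With that armed, perform_tests starts the 2-second randwrite run whose digest errors follow.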
00:24:56.857 [2024-07-15 10:37:51.384512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ee190
00:24:56.857 [2024-07-15 10:37:51.385612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:56.857 [2024-07-15 10:37:51.385654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
[... the same Data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triple repeats for the rest of the 2-second randwrite run, tqpair 0x15af6b0, qid:1, always WRITE len:1 with pdu, cid, lba and sqhd varying, from 10:37:51.395 through 10:37:52.449 ...]
00:24:57.898 [2024-07-15 10:37:52.464349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190fe2e8
[2024-07-15
10:37:52.465959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.898 [2024-07-15 10:37:52.465985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:57.898 [2024-07-15 10:37:52.477680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190e1f80 00:24:57.898 [2024-07-15 10:37:52.479495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.898 [2024-07-15 10:37:52.479526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:57.898 [2024-07-15 10:37:52.490973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190e01f8 00:24:57.898 [2024-07-15 10:37:52.492946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.898 [2024-07-15 10:37:52.492977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:57.898 [2024-07-15 10:37:52.504199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190e95a0 00:24:57.898 [2024-07-15 10:37:52.506345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.898 [2024-07-15 10:37:52.506376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:57.898 [2024-07-15 10:37:52.513139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190e88f8 00:24:57.898 [2024-07-15 10:37:52.514192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.898 [2024-07-15 10:37:52.514222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:57.898 [2024-07-15 10:37:52.525166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190df988 00:24:57.898 [2024-07-15 10:37:52.526204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.898 [2024-07-15 10:37:52.526234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:57.898 [2024-07-15 10:37:52.538430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190eea00 00:24:57.898 [2024-07-15 10:37:52.539541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.898 [2024-07-15 10:37:52.539571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.551844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190f2510 
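
Each error/WRITE/completion triple above is one deliberately failed I/O: the harness corrupts the accel framework's crc32c operation (visible later in this trace as accel_error_inject_error -o crc32c -t corrupt), so the digest that data_crc32_calc_done recomputes over a received data PDU no longer matches the DDGST trailer, and the WRITE completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) for bdev_nvme to retry. The digest itself is plain CRC32C over the PDU payload; a minimal stand-alone sketch of the comparison (hypothetical helper names, bitwise rather than table-driven for brevity):

```python
def crc32c(data: bytes) -> int:
    # CRC32C (Castagnoli) as NVMe/TCP defines the DDGST field:
    # seed 0xFFFFFFFF, reflected polynomial 0x82F63B78, final XOR.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def ddgst_matches(payload: bytes, received_ddgst: int) -> bool:
    # A mismatch here is what the log prints as "Data digest error";
    # the command is then failed back with a transient transport status.
    return crc32c(payload) == received_ddgst

assert crc32c(b"123456789") == 0xE3069283  # standard CRC32C check value
```

Because SPDK routes this checksum through the accel framework, injecting errors into accel's crc32c result is enough to force mismatches without touching anything on the wire.
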
00:24:58.158 [2024-07-15 10:37:52.553145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.553187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.566043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190fdeb0 00:24:58.158 [2024-07-15 10:37:52.567539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.567569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.579169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190f57b0 00:24:58.158 [2024-07-15 10:37:52.580818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.580848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.589936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190e9168 00:24:58.158 [2024-07-15 10:37:52.590712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.590742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.603193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ec840 00:24:58.158 [2024-07-15 10:37:52.604164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.604210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.616336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190f6458 00:24:58.158 [2024-07-15 10:37:52.617474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.617506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.630842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190e1710 00:24:58.158 [2024-07-15 10:37:52.632998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.633025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.639825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with 
pdu=0x2000190ed0b0 00:24:58.158 [2024-07-15 10:37:52.640780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.640810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.651761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190e27f0 00:24:58.158 [2024-07-15 10:37:52.652708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.652738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.665032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190fc560 00:24:58.158 [2024-07-15 10:37:52.666314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.666345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.678410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ff3c8 00:24:58.158 [2024-07-15 10:37:52.679710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.679742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.691754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190e4de8 00:24:58.158 [2024-07-15 10:37:52.693242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.693273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.703600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190efae0 00:24:58.158 [2024-07-15 10:37:52.704561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.704592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.716428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190fac10 00:24:58.158 [2024-07-15 10:37:52.717240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.717271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.729657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15af6b0) with pdu=0x2000190f9f68 00:24:58.158 [2024-07-15 10:37:52.730622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.730652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.742759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.158 [2024-07-15 10:37:52.743080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.743108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.756871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.158 [2024-07-15 10:37:52.757200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.757230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.158 [2024-07-15 10:37:52.770967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.158 [2024-07-15 10:37:52.771273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.158 [2024-07-15 10:37:52.771308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.159 [2024-07-15 10:37:52.785051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.159 [2024-07-15 10:37:52.785364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.159 [2024-07-15 10:37:52.785394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.159 [2024-07-15 10:37:52.799086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.159 [2024-07-15 10:37:52.799406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.159 [2024-07-15 10:37:52.799435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.418 [2024-07-15 10:37:52.813084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.418 [2024-07-15 10:37:52.813416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.418 [2024-07-15 10:37:52.813445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.418 [2024-07-15 10:37:52.827151] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.418 [2024-07-15 10:37:52.827475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.418 [2024-07-15 10:37:52.827505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.418 [2024-07-15 10:37:52.841315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.418 [2024-07-15 10:37:52.841625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.418 [2024-07-15 10:37:52.841655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.418 [2024-07-15 10:37:52.855325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.418 [2024-07-15 10:37:52.855648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.418 [2024-07-15 10:37:52.855678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.418 [2024-07-15 10:37:52.869528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.418 [2024-07-15 10:37:52.869849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.418 [2024-07-15 10:37:52.869887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.418 [2024-07-15 10:37:52.883648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.418 [2024-07-15 10:37:52.883968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.418 [2024-07-15 10:37:52.883997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.418 [2024-07-15 10:37:52.897862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.418 [2024-07-15 10:37:52.898291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.418 [2024-07-15 10:37:52.898321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.418 [2024-07-15 10:37:52.912142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.418 [2024-07-15 10:37:52.912468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.418 [2024-07-15 10:37:52.912498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.418 [2024-07-15 10:37:52.926305] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.418 [2024-07-15 10:37:52.926620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.418 [2024-07-15 10:37:52.926650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.418 [2024-07-15 10:37:52.940254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.418 [2024-07-15 10:37:52.940564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.418 [2024-07-15 10:37:52.940596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.418 [2024-07-15 10:37:52.954214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.418 [2024-07-15 10:37:52.954529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.418 [2024-07-15 10:37:52.954560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.418 [2024-07-15 10:37:52.968031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.418 [2024-07-15 10:37:52.968347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.418 [2024-07-15 10:37:52.968377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.418 [2024-07-15 10:37:52.981986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.418 [2024-07-15 10:37:52.982294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.418 [2024-07-15 10:37:52.982323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.419 [2024-07-15 10:37:52.996117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.419 [2024-07-15 10:37:52.996437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.419 [2024-07-15 10:37:52.996465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.419 [2024-07-15 10:37:53.010080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.419 [2024-07-15 10:37:53.010410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.419 [2024-07-15 10:37:53.010439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.419 
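
The completion prints carry the whole status decode: (00/22) is status code type 0x0 (generic) with status code 0x22 (transient transport error), sqhd is the submission queue head the controller reports back, p is the phase tag, and dnr:0 means the do-not-retry bit is clear, which is what lets the unlimited-retry policy of this run resubmit every failure. A sketch of the status-field split, assuming the field layout in the NVMe base specification:

```python
def decode_nvme_status(sf: int) -> dict:
    # Split the 15-bit NVMe completion Status Field (assumed layout per
    # the NVMe base spec; illustrative, not SPDK's own decoder).
    return {
        "sc":  sf & 0xFF,         # status code: 0x22 = transient transport error
        "sct": (sf >> 8) & 0x7,   # status code type: 0x0 = generic command status
        "crd": (sf >> 11) & 0x3,  # command retry delay index
        "m":   (sf >> 13) & 0x1,  # more status info in the error log page
        "dnr": (sf >> 14) & 0x1,  # do-not-retry; 0 allows the host to resubmit
    }

# "(00/22) ... dnr:0" decodes to a retryable generic-status error:
assert decode_nvme_status(0x0022) == {"sc": 0x22, "sct": 0, "crd": 0, "m": 0, "dnr": 0}
```
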
[2024-07-15 10:37:53.024020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.419 [2024-07-15 10:37:53.024325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.419 [2024-07-15 10:37:53.024355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.419 [2024-07-15 10:37:53.037943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.419 [2024-07-15 10:37:53.038285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.419 [2024-07-15 10:37:53.038314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.419 [2024-07-15 10:37:53.052079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.419 [2024-07-15 10:37:53.052394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.419 [2024-07-15 10:37:53.052424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.419 [2024-07-15 10:37:53.066197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.419 [2024-07-15 10:37:53.066528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.419 [2024-07-15 10:37:53.066558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.678 [2024-07-15 10:37:53.079795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.678 [2024-07-15 10:37:53.080101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.678 [2024-07-15 10:37:53.080128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.678 [2024-07-15 10:37:53.094016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.678 [2024-07-15 10:37:53.094329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.678 [2024-07-15 10:37:53.094360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.678 [2024-07-15 10:37:53.108089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.678 [2024-07-15 10:37:53.108445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.678 [2024-07-15 10:37:53.108474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:24:58.678 [2024-07-15 10:37:53.121629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.678 [2024-07-15 10:37:53.121890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.678 [2024-07-15 10:37:53.121916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.678 [2024-07-15 10:37:53.135120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.678 [2024-07-15 10:37:53.135401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.678 [2024-07-15 10:37:53.135434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.678 [2024-07-15 10:37:53.148210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.678 [2024-07-15 10:37:53.148500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.678 [2024-07-15 10:37:53.148526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.678 [2024-07-15 10:37:53.161394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.679 [2024-07-15 10:37:53.161674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.679 [2024-07-15 10:37:53.161700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.679 [2024-07-15 10:37:53.174550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.679 [2024-07-15 10:37:53.174845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.679 [2024-07-15 10:37:53.174871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.679 [2024-07-15 10:37:53.187837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.679 [2024-07-15 10:37:53.188135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.679 [2024-07-15 10:37:53.188162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.679 [2024-07-15 10:37:53.201103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.679 [2024-07-15 10:37:53.201381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.679 [2024-07-15 10:37:53.201409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.679 [2024-07-15 10:37:53.214363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.679 [2024-07-15 10:37:53.214713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.679 [2024-07-15 10:37:53.214741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.679 [2024-07-15 10:37:53.227590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.679 [2024-07-15 10:37:53.227844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.679 [2024-07-15 10:37:53.227871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.679 [2024-07-15 10:37:53.241030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.679 [2024-07-15 10:37:53.241312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.679 [2024-07-15 10:37:53.241339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.679 [2024-07-15 10:37:53.254343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.679 [2024-07-15 10:37:53.254640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.679 [2024-07-15 10:37:53.254667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.679 [2024-07-15 10:37:53.267644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.679 [2024-07-15 10:37:53.267900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.679 [2024-07-15 10:37:53.267938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.679 [2024-07-15 10:37:53.281014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.679 [2024-07-15 10:37:53.281277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.679 [2024-07-15 10:37:53.281304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.679 [2024-07-15 10:37:53.294576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78 00:24:58.679 [2024-07-15 10:37:53.294858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:58.679 [2024-07-15 10:37:53.294896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.679 [2024-07-15 10:37:53.307926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78
00:24:58.679 [2024-07-15 10:37:53.308216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:58.679 [2024-07-15 10:37:53.308243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.679 [2024-07-15 10:37:53.321177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78
00:24:58.679 [2024-07-15 10:37:53.321515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:58.937 [2024-07-15 10:37:53.321542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.937 [2024-07-15 10:37:53.334322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78
00:24:58.937 [2024-07-15 10:37:53.334601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:58.937 [2024-07-15 10:37:53.334628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.937 [2024-07-15 10:37:53.347852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78
00:24:58.937 [2024-07-15 10:37:53.348151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:58.937 [2024-07-15 10:37:53.348177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.937 [2024-07-15 10:37:53.361070] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78
00:24:58.937 [2024-07-15 10:37:53.361360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:58.937 [2024-07-15 10:37:53.361387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.937 [2024-07-15 10:37:53.374425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15af6b0) with pdu=0x2000190ecc78
00:24:58.937 [2024-07-15 10:37:53.374703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:58.937 [2024-07-15 10:37:53.374730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.937
00:24:58.937 Latency(us)
00:24:58.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:58.938 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:58.938 nvme0n1 : 2.01 20218.28 78.98 0.00 0.00 6316.18 2451.53 16408.27
00:24:58.938 ===================================================================================================================
00:24:58.938 Total : 20218.28 78.98 0.00 0.00 6316.18 2451.53 16408.27
00:24:58.938 0
00:24:58.938 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:58.938 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:58.938 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:58.938 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:58.938 | .driver_specific
00:24:58.938 | .nvme_error
00:24:58.938 | .status_code
00:24:58.938 | .command_transient_transport_error'
00:24:59.196 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 ))
00:24:59.196 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2410470
00:24:59.196 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2410470 ']'
00:24:59.196 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2410470
00:24:59.196 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:59.196 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:59.196 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2410470
00:24:59.196 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:59.196 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:59.196 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2410470'
00:24:59.196 killing process with pid 2410470
00:24:59.196 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2410470
00:24:59.196 Received shutdown signal, test time was about 2.000000 seconds
00:24:59.196
00:24:59.196 Latency(us)
00:24:59.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:59.196 ===================================================================================================================
00:24:59.196 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:59.196 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2410470
00:24:59.454 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:24:59.454 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:59.454 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:59.454 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:59.454 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:59.454 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2410882
00:24:59.454 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:24:59.454 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2410882 /var/tmp/bperf.sock
00:24:59.454 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2410882 ']'
00:24:59.454 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:59.454 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:59.454 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:59.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:59.454 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:59.454 10:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:59.454 [2024-07-15 10:37:53.990330] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:24:59.454 [2024-07-15 10:37:53.990412] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2410882 ]
00:24:59.454 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:59.454 Zero copy mechanism will not be used.
00:24:59.454 EAL: No free 2048 kB hugepages reported on node 1
00:24:59.454 [2024-07-15 10:37:54.053377] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:59.454 [2024-07-15 10:37:54.171024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:59.712 10:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:59.712 10:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:59.712 10:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:59.712 10:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:59.970 10:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:59.970 10:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:59.970 10:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:59.970 10:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:59.970 10:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:59.970 10:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:00.228 nvme0n1
00:25:00.228 10:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:00.228 10:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.228 10:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:00.228 10:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.228 10:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:00.228 10:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:00.486 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:00.486 Zero copy mechanism will not be used.
00:25:00.486 Running I/O for 2 seconds...
00:25:00.486 [2024-07-15 10:37:54.990321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90
00:25:00.486 [2024-07-15 10:37:54.990737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:00.486 [2024-07-15 10:37:54.990780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:00.487 [2024-07-15 10:37:55.002755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90
00:25:00.487 [2024-07-15 10:37:55.003130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:00.487 [2024-07-15 10:37:55.003176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:00.487 [2024-07-15 10:37:55.014442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90
00:25:00.487 [2024-07-15 10:37:55.014799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:00.487 [2024-07-15 10:37:55.014828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:00.487 [2024-07-15 10:37:55.025715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90
00:25:00.487 [2024-07-15 10:37:55.026068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:00.487 [2024-07-15 10:37:55.026097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:00.487 [2024-07-15 10:37:55.037394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90
00:25:00.487 [2024-07-15 10:37:55.037608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:00.487 [2024-07-15 10:37:55.037636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:00.487 [2024-07-15 10:37:55.048860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90
00:25:00.487 [2024-07-15 10:37:55.049306] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.487 [2024-07-15 10:37:55.049336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:00.487 [2024-07-15 10:37:55.060384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.487 [2024-07-15 10:37:55.060817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.487 [2024-07-15 10:37:55.060846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:00.487 [2024-07-15 10:37:55.071005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.487 [2024-07-15 10:37:55.071374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.487 [2024-07-15 10:37:55.071404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:00.487 [2024-07-15 10:37:55.081225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.487 [2024-07-15 10:37:55.081674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.487 [2024-07-15 10:37:55.081710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:00.487 [2024-07-15 10:37:55.092166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.487 [2024-07-15 10:37:55.092601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.487 [2024-07-15 10:37:55.092631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:00.487 [2024-07-15 10:37:55.102585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.487 [2024-07-15 10:37:55.103014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.487 [2024-07-15 10:37:55.103045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:00.487 [2024-07-15 10:37:55.113717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.487 [2024-07-15 10:37:55.114115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.487 [2024-07-15 10:37:55.114156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:00.487 [2024-07-15 10:37:55.124576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.487 
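
When the two-second window closes, digest.sh reads the verdict back the same way it did after the 4 KiB run above: bdev_get_iostat over the bperf RPC socket, with jq walking .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error, the per-status counter that --nvme-error-stat enables. An equivalent read-back in Python (a sketch, assuming rpc.py emits the same JSON shape the jq filter walks):

```python
import json
import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
SOCK = "/var/tmp/bperf.sock"

def transient_errcount(bdev: str = "nvme0n1") -> int:
    # Equivalent of: bperf_rpc bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
    #   | .driver_specific | .nvme_error | .status_code
    #   | .command_transient_transport_error'
    out = subprocess.check_output([RPC, "-s", SOCK, "bdev_get_iostat", "-b", bdev])
    stats = json.loads(out)["bdevs"][0]
    return stats["driver_specific"]["nvme_error"]["status_code"][
        "command_transient_transport_error"]
```

The first run passed its check as (( 159 > 0 )); any nonzero count proves the injected digest failures really surfaced as transient transport errors rather than being silently absorbed.
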
[2024-07-15 10:37:55.124982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.487 [2024-07-15 10:37:55.125016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:00.487 [2024-07-15 10:37:55.134320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.487 [2024-07-15 10:37:55.134700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.487 [2024-07-15 10:37:55.134730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:00.747 [2024-07-15 10:37:55.144155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.747 [2024-07-15 10:37:55.144568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.747 [2024-07-15 10:37:55.144597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:00.747 [2024-07-15 10:37:55.154073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.747 [2024-07-15 10:37:55.154538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.747 [2024-07-15 10:37:55.154567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:00.747 [2024-07-15 10:37:55.164920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.747 [2024-07-15 10:37:55.165222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.747 [2024-07-15 10:37:55.165251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:00.747 [2024-07-15 10:37:55.175568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.747 [2024-07-15 10:37:55.176029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.747 [2024-07-15 10:37:55.176059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:00.747 [2024-07-15 10:37:55.186157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.747 [2024-07-15 10:37:55.186550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.747 [2024-07-15 10:37:55.186579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:00.747 [2024-07-15 10:37:55.196013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.747 [2024-07-15 10:37:55.196375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.747 [2024-07-15 10:37:55.196405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:00.747 [2024-07-15 10:37:55.206674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.747 [2024-07-15 10:37:55.207055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.747 [2024-07-15 10:37:55.207084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:00.747 [2024-07-15 10:37:55.217327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.747 [2024-07-15 10:37:55.217747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.747 [2024-07-15 10:37:55.217777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:00.747 [2024-07-15 10:37:55.227659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.748 [2024-07-15 10:37:55.228077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.748 [2024-07-15 10:37:55.228107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:00.748 [2024-07-15 10:37:55.238172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.748 [2024-07-15 10:37:55.238573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.748 [2024-07-15 10:37:55.238602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:00.748 [2024-07-15 10:37:55.247996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.748 [2024-07-15 10:37:55.248400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.748 [2024-07-15 10:37:55.248431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:00.748 [2024-07-15 10:37:55.259118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.748 [2024-07-15 10:37:55.259483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.748 [2024-07-15 10:37:55.259520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:00.748 [2024-07-15 10:37:55.268066] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.748 [2024-07-15 10:37:55.268433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.748 [2024-07-15 10:37:55.268463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:00.748 [2024-07-15 10:37:55.278068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.748 [2024-07-15 10:37:55.278452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.748 [2024-07-15 10:37:55.278481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:00.748 [2024-07-15 10:37:55.288229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.748 [2024-07-15 10:37:55.288630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.748 [2024-07-15 10:37:55.288659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:00.748 [2024-07-15 10:37:55.299190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.748 [2024-07-15 10:37:55.299558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.748 [2024-07-15 10:37:55.299587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:00.748 [2024-07-15 10:37:55.309094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.748 [2024-07-15 10:37:55.309431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.748 [2024-07-15 10:37:55.309462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:00.748 [2024-07-15 10:37:55.320762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.748 [2024-07-15 10:37:55.321187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.748 [2024-07-15 10:37:55.321218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:00.748 [2024-07-15 10:37:55.331542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.748 [2024-07-15 10:37:55.331884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.748 [2024-07-15 10:37:55.331914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:25:00.748 [2024-07-15 10:37:55.341431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.748 [2024-07-15 10:37:55.341831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.748 [2024-07-15 10:37:55.341861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:00.748 [2024-07-15 10:37:55.352019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.748 [2024-07-15 10:37:55.352410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.748 [2024-07-15 10:37:55.352440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:00.748 [2024-07-15 10:37:55.362056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.748 [2024-07-15 10:37:55.362399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.748 [2024-07-15 10:37:55.362430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:00.748 [2024-07-15 10:37:55.371955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.748 [2024-07-15 10:37:55.372338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.748 [2024-07-15 10:37:55.372368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:00.748 [2024-07-15 10:37:55.381615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.748 [2024-07-15 10:37:55.382037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.748 [2024-07-15 10:37:55.382067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:00.748 [2024-07-15 10:37:55.392676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:00.748 [2024-07-15 10:37:55.393087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.748 [2024-07-15 10:37:55.393120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.008 [2024-07-15 10:37:55.402453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.008 [2024-07-15 10:37:55.402861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.008 [2024-07-15 10:37:55.402898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.008 [2024-07-15 10:37:55.413568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.008 [2024-07-15 10:37:55.413980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.008 [2024-07-15 10:37:55.414012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.008 [2024-07-15 10:37:55.424435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.008 [2024-07-15 10:37:55.424774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.008 [2024-07-15 10:37:55.424803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.008 [2024-07-15 10:37:55.434709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.008 [2024-07-15 10:37:55.435121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.008 [2024-07-15 10:37:55.435151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.008 [2024-07-15 10:37:55.445725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.008 [2024-07-15 10:37:55.446068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.008 [2024-07-15 10:37:55.446097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.008 [2024-07-15 10:37:55.456759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.008 [2024-07-15 10:37:55.457133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.008 [2024-07-15 10:37:55.457163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.008 [2024-07-15 10:37:55.467120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.008 [2024-07-15 10:37:55.467529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.008 [2024-07-15 10:37:55.467558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.008 [2024-07-15 10:37:55.478587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.008 [2024-07-15 10:37:55.478940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.008 [2024-07-15 10:37:55.478970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.008 [2024-07-15 10:37:55.489530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.008 [2024-07-15 10:37:55.489961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.008 [2024-07-15 10:37:55.489990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.008 [2024-07-15 10:37:55.500714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.008 [2024-07-15 10:37:55.501106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.009 [2024-07-15 10:37:55.501135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.009 [2024-07-15 10:37:55.511468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.009 [2024-07-15 10:37:55.511840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.009 [2024-07-15 10:37:55.511870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.009 [2024-07-15 10:37:55.522094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.009 [2024-07-15 10:37:55.522564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.009 [2024-07-15 10:37:55.522593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.009 [2024-07-15 10:37:55.532604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.009 [2024-07-15 10:37:55.533039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.009 [2024-07-15 10:37:55.533077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.009 [2024-07-15 10:37:55.543328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.009 [2024-07-15 10:37:55.543699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.009 [2024-07-15 10:37:55.543728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.009 [2024-07-15 10:37:55.554050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.009 [2024-07-15 10:37:55.554470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.009 [2024-07-15 10:37:55.554499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.009 [2024-07-15 10:37:55.565036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.009 [2024-07-15 10:37:55.565492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.009 [2024-07-15 10:37:55.565522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.009 [2024-07-15 10:37:55.575872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.009 [2024-07-15 10:37:55.576402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.009 [2024-07-15 10:37:55.576431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.009 [2024-07-15 10:37:55.586423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.009 [2024-07-15 10:37:55.586868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.009 [2024-07-15 10:37:55.586904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.009 [2024-07-15 10:37:55.597200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.009 [2024-07-15 10:37:55.597565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.009 [2024-07-15 10:37:55.597594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.009 [2024-07-15 10:37:55.607908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.009 [2024-07-15 10:37:55.608251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.009 [2024-07-15 10:37:55.608281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.009 [2024-07-15 10:37:55.619027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.009 [2024-07-15 10:37:55.619370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.009 [2024-07-15 10:37:55.619400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.009 [2024-07-15 10:37:55.629670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.009 [2024-07-15 10:37:55.630043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.009 
[2024-07-15 10:37:55.630073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.009 [2024-07-15 10:37:55.639758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.009 [2024-07-15 10:37:55.640111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.009 [2024-07-15 10:37:55.640140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.009 [2024-07-15 10:37:55.650274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.009 [2024-07-15 10:37:55.650683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.009 [2024-07-15 10:37:55.650712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.269 [2024-07-15 10:37:55.660126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.269 [2024-07-15 10:37:55.660511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.269 [2024-07-15 10:37:55.660541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.269 [2024-07-15 10:37:55.670132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.269 [2024-07-15 10:37:55.670512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.269 [2024-07-15 10:37:55.670541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.269 [2024-07-15 10:37:55.681167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.269 [2024-07-15 10:37:55.681569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.269 [2024-07-15 10:37:55.681598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.269 [2024-07-15 10:37:55.692582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.269 [2024-07-15 10:37:55.692923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.269 [2024-07-15 10:37:55.692953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.269 [2024-07-15 10:37:55.702527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.269 [2024-07-15 10:37:55.702910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.269 [2024-07-15 10:37:55.702939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.269 [2024-07-15 10:37:55.712798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.269 [2024-07-15 10:37:55.713138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.269 [2024-07-15 10:37:55.713168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.269 [2024-07-15 10:37:55.722229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.269 [2024-07-15 10:37:55.722540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.269 [2024-07-15 10:37:55.722570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.733116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.733462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.733491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.743046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.743374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.743402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.753217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.753601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.753630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.763218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.763571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.763600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.773841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.774182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.774214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.784383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.784708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.784737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.794220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.794634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.794663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.804807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.805118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.805156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.815105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.815463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.815492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.825681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.826031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.826060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.836358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.836767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.836797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.847038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.847401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.847430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.858314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.858671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.858701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.869266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.869597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.869626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.879414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.879799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.879828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.889742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.890129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.890159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.901342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.901672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.901700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.270 [2024-07-15 10:37:55.911117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.270 [2024-07-15 10:37:55.911458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.270 [2024-07-15 10:37:55.911487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.530 [2024-07-15 10:37:55.920839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.530 
[2024-07-15 10:37:55.921211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:55.921244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:55.931279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:55.931606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:55.931635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:55.941379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:55.941758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:55.941787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:55.951715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:55.952110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:55.952140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:55.962359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:55.962841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:55.962871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:55.972526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:55.972874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:55.972915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:55.981532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:55.981923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:55.981964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:55.991230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:55.991616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:55.991646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:56.002544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:56.002968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:56.002998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:56.012577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:56.012970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:56.012999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:56.022837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:56.023208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:56.023239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:56.033271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:56.033608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:56.033638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:56.043458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:56.043760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:56.043804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:56.054414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:56.054816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:56.054846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:56.065614] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:56.065986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:56.066016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:56.076442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:56.076818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:56.076847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:56.086839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:56.087244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:56.087273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:56.096725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:56.097118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:56.097147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:56.106897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:56.107216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:56.107245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:56.116920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:56.117319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:56.117347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:56.127485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:56.127895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:56.127933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:25:01.531 [2024-07-15 10:37:56.137955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:56.138363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:56.138391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:56.148354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:56.148745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:56.148773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:56.158975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:56.159314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:56.159342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:56.168594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.531 [2024-07-15 10:37:56.169000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.531 [2024-07-15 10:37:56.169029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.531 [2024-07-15 10:37:56.179591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.791 [2024-07-15 10:37:56.179969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.791 [2024-07-15 10:37:56.179999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.791 [2024-07-15 10:37:56.190967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.791 [2024-07-15 10:37:56.191389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.791 [2024-07-15 10:37:56.191417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.791 [2024-07-15 10:37:56.201104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.791 [2024-07-15 10:37:56.201435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.791 [2024-07-15 10:37:56.201463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.791 [2024-07-15 10:37:56.211247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.791 [2024-07-15 10:37:56.211639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.791 [2024-07-15 10:37:56.211668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.791 [2024-07-15 10:37:56.222062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.791 [2024-07-15 10:37:56.222446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.791 [2024-07-15 10:37:56.222475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.791 [2024-07-15 10:37:56.232015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.791 [2024-07-15 10:37:56.232340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.791 [2024-07-15 10:37:56.232367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:01.791 [2024-07-15 10:37:56.242560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.791 [2024-07-15 10:37:56.243060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.791 [2024-07-15 10:37:56.243091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.791 [2024-07-15 10:37:56.253295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.791 [2024-07-15 10:37:56.253686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.791 [2024-07-15 10:37:56.253721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:01.791 [2024-07-15 10:37:56.263924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.791 [2024-07-15 10:37:56.264368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.791 [2024-07-15 10:37:56.264412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:01.791 [2024-07-15 10:37:56.273441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90 00:25:01.791 [2024-07-15 10:37:56.273957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.791 [2024-07-15 10:37:56.273990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:01.791 [2024-07-15 10:37:56.283536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90
00:25:01.791 [2024-07-15 10:37:56.283970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:01.791 [2024-07-15 10:37:56.284000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... ~60 more data_crc32_calc_done *ERROR* / WRITE *NOTICE* / spdk_nvme_print_completion triples, [2024-07-15 10:37:56.294214] through [2024-07-15 10:37:56.967487], elided; each repeats the same Data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pair and differs only in timestamp, lba, and sqhd ...]
00:25:02.575 [2024-07-15 10:37:56.977662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13e4af0) with pdu=0x2000190fef90
00:25:02.575 [2024-07-15 10:37:56.977905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:02.575 [2024-07-15 10:37:56.977941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:02.575
00:25:02.575 Latency(us)
00:25:02.575 Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min       max
00:25:02.575 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:25:02.575 nvme0n1            :       2.01   2937.99   367.25     0.00     0.00   5432.03  4077.80  16602.45
00:25:02.575 ===================================================================================================================
00:25:02.575 Total              :              2937.99   367.25     0.00     0.00   5432.03  4077.80  16602.45
00:25:02.575 0
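The trace that follows reads the transient-transport-error counter back over the bperf RPC socket and filters it with jq. A minimal standalone sketch of the same readback (assuming, as in this run, that bperf still listens on /var/tmp/bperf.sock and exposes an nvme0n1 bdev):

    # Sketch: count of COMMAND TRANSIENT TRANSPORT ERROR completions for one bdev,
    # read from bdev_get_iostat's driver_specific NVMe error stats.
    get_transient_errcount() {
        local bdev=$1
        ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }
    errcount=$(get_transient_errcount nvme0n1)   # 190 in this run
    (( errcount > 0 ))                           # digest_error passes only if errors were actually counted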
00:25:02.575 10:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:02.575 10:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:02.575 10:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:02.575 | .driver_specific
00:25:02.575 | .nvme_error
00:25:02.575 | .status_code
00:25:02.575 | .command_transient_transport_error'
00:25:02.575 10:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:02.833 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 190 > 0 ))
00:25:02.833 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2410882
00:25:02.833 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2410882 ']'
00:25:02.833 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2410882
00:25:02.833 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:25:02.833 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:02.833 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2410882
00:25:02.833 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:25:02.833 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:25:02.833 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2410882'
00:25:02.833 killing process with pid 2410882
00:25:02.833 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2410882
00:25:02.833 Received shutdown signal, test time was about 2.000000 seconds
00:25:02.833
00:25:02.833 Latency(us)
00:25:02.833 Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min       max
00:25:02.833 ===================================================================================================================
00:25:02.833 Total              :                 0.00     0.00     0.00     0.00      0.00     0.00      0.00
00:25:02.833 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2410882
00:25:03.093 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2409515
00:25:03.093 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2409515 ']'
00:25:03.093 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2409515
00:25:03.094 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:25:03.094 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:03.094 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2409515
00:25:03.094 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:03.094 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:03.094 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2409515'
00:25:03.094 killing process with pid 2409515
00:25:03.094 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2409515
00:25:03.094 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2409515
00:25:03.353
00:25:03.353 real	0m15.520s
00:25:03.353 user	0m31.137s
00:25:03.353 sys	0m3.958s
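The autotest_common.sh @948-@972 trace above is the killprocess helper probing the pid before signalling it; the same helper prints the "No such process" branch during the teardown below. A simplified, hedged reconstruction of that guard:

    # Sketch of the killprocess guard traced above (autotest_common.sh @948-@972);
    # the real helper also inspects `ps --no-headers -o comm=` to special-case
    # processes launched through sudo, which this sketch omits.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # @948: no pid supplied
        if kill -0 "$pid" 2>/dev/null; then       # @952: liveness probe, no signal delivered
            echo "killing process with pid $pid"  # @966
            kill "$pid"                           # @967: default SIGTERM
            wait "$pid"                           # @972: reap the child, propagate its status
        else
            echo "Process with pid $pid is not found"
        fi
    }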
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:03.354 ************************************
00:25:03.354 END TEST nvmf_digest_error
00:25:03.354 ************************************
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:03.354 rmmod nvme_tcp
00:25:03.354 rmmod nvme_fabrics
00:25:03.354 rmmod nvme_keyring
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2409515 ']'
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2409515
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2409515 ']'
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2409515
00:25:03.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2409515) - No such process
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2409515 is not found'
00:25:03.354 Process with pid 2409515 is not found
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:03.354 10:37:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:05.888 10:37:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:05.888
00:25:05.888 real	0m36.269s
00:25:05.888 user	1m4.697s
00:25:05.888 sys	0m9.569s
00:25:05.888 10:37:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:05.888 10:37:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:25:05.888 ************************************
00:25:05.888 END TEST nvmf_digest
00:25:05.888 ************************************
00:25:05.888 10:37:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:25:05.888 10:37:59 nvmf_tcp
-- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:25:05.888 10:37:59 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:25:05.888 10:37:59 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:25:05.888 10:37:59 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:05.888 10:37:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:05.888 10:37:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:05.888 10:37:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:05.888 ************************************ 00:25:05.888 START TEST nvmf_bdevperf 00:25:05.888 ************************************ 00:25:05.888 10:37:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:05.888 * Looking for test storage... 00:25:05.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.888 10:38:00 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:05.889 10:38:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.791 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:07.792 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:07.792 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:07.792 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:07.792 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:07.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:07.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:25:07.792 00:25:07.792 --- 10.0.0.2 ping statistics --- 00:25:07.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.792 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:07.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:25:07.792 00:25:07.792 --- 10.0.0.1 ping statistics --- 00:25:07.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.792 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2413350 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2413350 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2413350 ']' 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:07.792 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:07.792 [2024-07-15 10:38:02.418007] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:25:07.792 [2024-07-15 10:38:02.418094] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.051 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.051 [2024-07-15 10:38:02.494230] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:08.051 [2024-07-15 10:38:02.619816] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.051 [2024-07-15 10:38:02.619883] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.051 [2024-07-15 10:38:02.619902] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.051 [2024-07-15 10:38:02.619915] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.051 [2024-07-15 10:38:02.619926] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:08.051 [2024-07-15 10:38:02.619986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.051 [2024-07-15 10:38:02.620289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:08.051 [2024-07-15 10:38:02.620295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:08.310 [2024-07-15 10:38:02.777290] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:08.310 Malloc0 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:08.310 [2024-07-15 10:38:02.843028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:08.310 { 00:25:08.310 "params": { 00:25:08.310 "name": "Nvme$subsystem", 00:25:08.310 "trtype": "$TEST_TRANSPORT", 00:25:08.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.310 "adrfam": "ipv4", 00:25:08.310 "trsvcid": "$NVMF_PORT", 00:25:08.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.310 "hdgst": ${hdgst:-false}, 00:25:08.310 "ddgst": ${ddgst:-false} 00:25:08.310 }, 00:25:08.310 "method": "bdev_nvme_attach_controller" 00:25:08.310 } 00:25:08.310 EOF 00:25:08.310 )") 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:08.310 10:38:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:08.310 "params": { 00:25:08.310 "name": "Nvme1", 00:25:08.310 "trtype": "tcp", 00:25:08.310 "traddr": "10.0.0.2", 00:25:08.310 "adrfam": "ipv4", 00:25:08.310 "trsvcid": "4420", 00:25:08.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:08.310 "hdgst": false, 00:25:08.310 "ddgst": false 00:25:08.310 }, 00:25:08.310 "method": "bdev_nvme_attach_controller" 00:25:08.310 }' 00:25:08.310 [2024-07-15 10:38:02.892118] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
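The five rpc_cmd calls above are the entire target-side setup. Issued directly with scripts/rpc.py against the same /var/tmp/spdk.sock socket that rpc_cmd wraps, the sequence is (a sketch of the equivalent standalone commands, not the harness's exact wrapper):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8 KiB I/O unit size
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0               # 64 MiB ramdisk with 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON printed by gen_nvmf_target_json just above is the host half of the same picture: bdevperf reads it from the fd handed to --json and issues bdev_nvme_attach_controller against that listener as Nvme1.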
00:25:08.310 [2024-07-15 10:38:02.892211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413378 ] 00:25:08.310 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.310 [2024-07-15 10:38:02.951106] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.568 [2024-07-15 10:38:03.065708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.826 Running I/O for 1 seconds... 00:25:09.758 00:25:09.758 Latency(us) 00:25:09.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.758 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:09.758 Verification LBA range: start 0x0 length 0x4000 00:25:09.758 Nvme1n1 : 1.01 8626.24 33.70 0.00 0.00 14773.44 2257.35 14757.74 00:25:09.758 =================================================================================================================== 00:25:09.758 Total : 8626.24 33.70 0.00 0.00 14773.44 2257.35 14757.74 00:25:10.016 10:38:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2413641 00:25:10.016 10:38:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:10.016 10:38:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:10.016 10:38:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:10.016 10:38:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:10.016 10:38:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:10.016 10:38:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:10.016 10:38:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:10.016 { 00:25:10.016 "params": { 00:25:10.016 "name": "Nvme$subsystem", 00:25:10.016 "trtype": "$TEST_TRANSPORT", 00:25:10.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.016 "adrfam": "ipv4", 00:25:10.016 "trsvcid": "$NVMF_PORT", 00:25:10.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.016 "hdgst": ${hdgst:-false}, 00:25:10.016 "ddgst": ${ddgst:-false} 00:25:10.016 }, 00:25:10.016 "method": "bdev_nvme_attach_controller" 00:25:10.016 } 00:25:10.016 EOF 00:25:10.016 )") 00:25:10.016 10:38:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:10.016 10:38:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:10.016 10:38:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:10.016 10:38:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:10.016 "params": { 00:25:10.016 "name": "Nvme1", 00:25:10.016 "trtype": "tcp", 00:25:10.016 "traddr": "10.0.0.2", 00:25:10.016 "adrfam": "ipv4", 00:25:10.016 "trsvcid": "4420", 00:25:10.016 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:10.016 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:10.016 "hdgst": false, 00:25:10.016 "ddgst": false 00:25:10.016 }, 00:25:10.016 "method": "bdev_nvme_attach_controller" 00:25:10.016 }' 00:25:10.274 [2024-07-15 10:38:04.690297] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
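The one-second run's results table above is internally consistent: throughput is IOPS times the 4 KiB I/O size, and with queue depth 128 the average latency follows approximately from Little's law (latency ≈ QD / IOPS). A quick check:

    awk 'BEGIN {
        iops = 8626.24
        printf "MiB/s  = %.2f\n", iops * 4096 / (1024 * 1024)   # 33.70, matching the table
        printf "avg us = %.0f\n", 128 / iops * 1e6              # ~14839, close to the 14773.44 reported
    }'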
00:25:10.274 [2024-07-15 10:38:04.690372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413641 ] 00:25:10.274 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.274 [2024-07-15 10:38:04.750463] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.274 [2024-07-15 10:38:04.860574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.888 Running I/O for 15 seconds... 00:25:13.420 10:38:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2413350 00:25:13.420 10:38:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:13.420 [2024-07-15 10:38:07.658248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.420 [2024-07-15 10:38:07.658302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.420 [2024-07-15 10:38:07.658338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:28616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.420 [2024-07-15 10:38:07.658360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.420 [2024-07-15 10:38:07.658382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.420 [2024-07-15 10:38:07.658399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.420 [2024-07-15 10:38:07.658417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.420 [2024-07-15 10:38:07.658432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.420 [2024-07-15 10:38:07.658461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.420 [2024-07-15 10:38:07.658481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.420 [2024-07-15 10:38:07.658499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.421 [2024-07-15 10:38:07.658515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.421 [2024-07-15 10:38:07.658535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:28656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.421 [2024-07-15 10:38:07.658550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.421 [2024-07-15 10:38:07.658569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.421 [2024-07-15 10:38:07.658587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.421 [... 2024-07-15 10:38:07.658606 - 10:38:07.662536: the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for every remaining in-flight command on qid:1 (WRITE lba:29272-29624 and READ lba:28664-29248, len:8 each), all completed as ABORTED - SQ DELETION (00/08); roughly 120 near-identical message pairs condensed ...] 00:25:13.423 [2024-07-15 10:38:07.662541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104d4c0 is same with the state(5) to be set 00:25:13.423 [2024-07-15 10:38:07.662560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:13.423 [2024-07-15 10:38:07.662573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:13.423 [2024-07-15 10:38:07.662586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29256 len:8 PRP1 0x0 PRP2 0x0 00:25:13.423 [2024-07-15 10:38:07.662599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.423 [2024-07-15 10:38:07.662664]
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x104d4c0 was disconnected and freed. reset controller. 00:25:13.423 [2024-07-15 10:38:07.666295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.423 [2024-07-15 10:38:07.666370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.423 [2024-07-15 10:38:07.667256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.423 [2024-07-15 10:38:07.667289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.423 [2024-07-15 10:38:07.667307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.423 [2024-07-15 10:38:07.667547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.423 [2024-07-15 10:38:07.667791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.423 [2024-07-15 10:38:07.667815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.423 [2024-07-15 10:38:07.667833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.424 [2024-07-15 10:38:07.671441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.424 [2024-07-15 10:38:07.680533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.424 [2024-07-15 10:38:07.680990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.424 [2024-07-15 10:38:07.681020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.424 [2024-07-15 10:38:07.681036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.424 [2024-07-15 10:38:07.681278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.424 [2024-07-15 10:38:07.681520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.424 [2024-07-15 10:38:07.681543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.424 [2024-07-15 10:38:07.681559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.424 [2024-07-15 10:38:07.685130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
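Every cycle fails the same way: nvmf_tgt (pid 2413350) was SIGKILLed by the harness a moment earlier, so nothing accepts on 10.0.0.2:4420 and each reconnect attempt gets errno 111 (ECONNREFUSED), which posix_sock_create reports above. The same condition can be observed from the shell; a sketch using bash's /dev/tcp redirection:

    # Exits non-zero while the listener is gone; bdev_nvme resets will keep
    # failing until a target is listening on 10.0.0.2:4420 again.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo '10.0.0.2:4420 refused the connection'
    fi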
00:25:13.424 [... 2024-07-15 10:38:07.694396 - 10:38:07.768266: six more reset cycles follow at roughly 14 ms intervals, each identical to the one above: resetting controller -> posix.c connect() to 10.0.0.2 port 4420 fails with errno = 111 -> sock connection error of tqpair=0xe1cac0 -> Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor -> Ctrlr is in error state / controller reinitialization failed / in failed state -> Resetting controller failed; repetitive retry output condensed ...]
00:25:13.424 [2024-07-15 10:38:07.777548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.424 [2024-07-15 10:38:07.777960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.424 [2024-07-15 10:38:07.777992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.424 [2024-07-15 10:38:07.778009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.424 [2024-07-15 10:38:07.778247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.424 [2024-07-15 10:38:07.778488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.424 [2024-07-15 10:38:07.778512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.424 [2024-07-15 10:38:07.778526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.424 [2024-07-15 10:38:07.782098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.424 [2024-07-15 10:38:07.791571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.424 [2024-07-15 10:38:07.792002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.424 [2024-07-15 10:38:07.792033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.424 [2024-07-15 10:38:07.792050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.424 [2024-07-15 10:38:07.792287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.424 [2024-07-15 10:38:07.792529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.424 [2024-07-15 10:38:07.792552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.424 [2024-07-15 10:38:07.792566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.424 [2024-07-15 10:38:07.796138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.424 [2024-07-15 10:38:07.805402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.424 [2024-07-15 10:38:07.805840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.424 [2024-07-15 10:38:07.805900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.424 [2024-07-15 10:38:07.805920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.424 [2024-07-15 10:38:07.806158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.424 [2024-07-15 10:38:07.806398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.424 [2024-07-15 10:38:07.806422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.424 [2024-07-15 10:38:07.806437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.424 [2024-07-15 10:38:07.810012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.424 [2024-07-15 10:38:07.819276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.425 [2024-07-15 10:38:07.819773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.425 [2024-07-15 10:38:07.819823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.425 [2024-07-15 10:38:07.819840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.425 [2024-07-15 10:38:07.820086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.425 [2024-07-15 10:38:07.820328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.425 [2024-07-15 10:38:07.820351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.425 [2024-07-15 10:38:07.820366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.425 [2024-07-15 10:38:07.823935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.425 [2024-07-15 10:38:07.833204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.425 [2024-07-15 10:38:07.833700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.425 [2024-07-15 10:38:07.833751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.425 [2024-07-15 10:38:07.833770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.425 [2024-07-15 10:38:07.834019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.425 [2024-07-15 10:38:07.834261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.425 [2024-07-15 10:38:07.834284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.425 [2024-07-15 10:38:07.834299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.425 [2024-07-15 10:38:07.837870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.425 [2024-07-15 10:38:07.847138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.425 [2024-07-15 10:38:07.847579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.425 [2024-07-15 10:38:07.847609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.425 [2024-07-15 10:38:07.847626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.425 [2024-07-15 10:38:07.847863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.425 [2024-07-15 10:38:07.848122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.425 [2024-07-15 10:38:07.848145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.425 [2024-07-15 10:38:07.848160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.425 [2024-07-15 10:38:07.851725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.425 [2024-07-15 10:38:07.860993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.425 [2024-07-15 10:38:07.861424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.425 [2024-07-15 10:38:07.861455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.425 [2024-07-15 10:38:07.861472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.425 [2024-07-15 10:38:07.861709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.425 [2024-07-15 10:38:07.861962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.425 [2024-07-15 10:38:07.861986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.425 [2024-07-15 10:38:07.862001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.425 [2024-07-15 10:38:07.865561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.425 [2024-07-15 10:38:07.874811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.425 [2024-07-15 10:38:07.875232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.425 [2024-07-15 10:38:07.875263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.425 [2024-07-15 10:38:07.875280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.425 [2024-07-15 10:38:07.875517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.425 [2024-07-15 10:38:07.875758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.425 [2024-07-15 10:38:07.875781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.425 [2024-07-15 10:38:07.875795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.425 [2024-07-15 10:38:07.879368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.425 [2024-07-15 10:38:07.888836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.425 [2024-07-15 10:38:07.889301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.425 [2024-07-15 10:38:07.889332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.425 [2024-07-15 10:38:07.889349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.425 [2024-07-15 10:38:07.889587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.425 [2024-07-15 10:38:07.889828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.425 [2024-07-15 10:38:07.889850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.425 [2024-07-15 10:38:07.889865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.425 [2024-07-15 10:38:07.893447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.425 [2024-07-15 10:38:07.902715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.425 [2024-07-15 10:38:07.903131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.425 [2024-07-15 10:38:07.903162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.425 [2024-07-15 10:38:07.903179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.425 [2024-07-15 10:38:07.903416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.425 [2024-07-15 10:38:07.903657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.425 [2024-07-15 10:38:07.903680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.425 [2024-07-15 10:38:07.903695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.425 [2024-07-15 10:38:07.907270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.425 [2024-07-15 10:38:07.916756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.425 [2024-07-15 10:38:07.917190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.425 [2024-07-15 10:38:07.917221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.425 [2024-07-15 10:38:07.917238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.425 [2024-07-15 10:38:07.917475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.425 [2024-07-15 10:38:07.917716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.425 [2024-07-15 10:38:07.917739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.425 [2024-07-15 10:38:07.917754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.425 [2024-07-15 10:38:07.921331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.425 [2024-07-15 10:38:07.930599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.425 [2024-07-15 10:38:07.931040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.425 [2024-07-15 10:38:07.931070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.425 [2024-07-15 10:38:07.931088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.425 [2024-07-15 10:38:07.931325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.425 [2024-07-15 10:38:07.931566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.425 [2024-07-15 10:38:07.931590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.425 [2024-07-15 10:38:07.931605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.426 [2024-07-15 10:38:07.935184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.426 [2024-07-15 10:38:07.944451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.426 [2024-07-15 10:38:07.944854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.426 [2024-07-15 10:38:07.944893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.426 [2024-07-15 10:38:07.944919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.426 [2024-07-15 10:38:07.945157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.426 [2024-07-15 10:38:07.945398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.426 [2024-07-15 10:38:07.945422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.426 [2024-07-15 10:38:07.945437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.426 [2024-07-15 10:38:07.949016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.426 [2024-07-15 10:38:07.958491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.426 [2024-07-15 10:38:07.958920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.426 [2024-07-15 10:38:07.958951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.426 [2024-07-15 10:38:07.958969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.426 [2024-07-15 10:38:07.959206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.426 [2024-07-15 10:38:07.959447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.426 [2024-07-15 10:38:07.959470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.426 [2024-07-15 10:38:07.959486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.426 [2024-07-15 10:38:07.963061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.426 [2024-07-15 10:38:07.972331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.426 [2024-07-15 10:38:07.972791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.426 [2024-07-15 10:38:07.972839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.426 [2024-07-15 10:38:07.972856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.426 [2024-07-15 10:38:07.973102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.426 [2024-07-15 10:38:07.973345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.426 [2024-07-15 10:38:07.973368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.426 [2024-07-15 10:38:07.973383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.426 [2024-07-15 10:38:07.976955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.426 [2024-07-15 10:38:07.986249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.426 [2024-07-15 10:38:07.986779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.426 [2024-07-15 10:38:07.986828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.426 [2024-07-15 10:38:07.986846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.426 [2024-07-15 10:38:07.987094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.426 [2024-07-15 10:38:07.987335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.426 [2024-07-15 10:38:07.987364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.426 [2024-07-15 10:38:07.987379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.426 [2024-07-15 10:38:07.990954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.426 [2024-07-15 10:38:08.000226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.426 [2024-07-15 10:38:08.000721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.426 [2024-07-15 10:38:08.000771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.426 [2024-07-15 10:38:08.000788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.426 [2024-07-15 10:38:08.001038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.426 [2024-07-15 10:38:08.001280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.426 [2024-07-15 10:38:08.001303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.426 [2024-07-15 10:38:08.001317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.426 [2024-07-15 10:38:08.004899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.426 [2024-07-15 10:38:08.014178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.426 [2024-07-15 10:38:08.014606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.426 [2024-07-15 10:38:08.014637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.426 [2024-07-15 10:38:08.014654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.426 [2024-07-15 10:38:08.014902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.426 [2024-07-15 10:38:08.015144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.426 [2024-07-15 10:38:08.015167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.426 [2024-07-15 10:38:08.015182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.426 [2024-07-15 10:38:08.018763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.426 [2024-07-15 10:38:08.028036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.426 [2024-07-15 10:38:08.028537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.426 [2024-07-15 10:38:08.028588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.426 [2024-07-15 10:38:08.028605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.426 [2024-07-15 10:38:08.028843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.426 [2024-07-15 10:38:08.029093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.426 [2024-07-15 10:38:08.029117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.426 [2024-07-15 10:38:08.029132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.426 [2024-07-15 10:38:08.032699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.426 [2024-07-15 10:38:08.041977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.426 [2024-07-15 10:38:08.042388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.426 [2024-07-15 10:38:08.042419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.426 [2024-07-15 10:38:08.042436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.426 [2024-07-15 10:38:08.042673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.426 [2024-07-15 10:38:08.042925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.426 [2024-07-15 10:38:08.042949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.426 [2024-07-15 10:38:08.042964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.426 [2024-07-15 10:38:08.046531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.426 [2024-07-15 10:38:08.055819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.426 [2024-07-15 10:38:08.056232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.426 [2024-07-15 10:38:08.056263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.426 [2024-07-15 10:38:08.056280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.426 [2024-07-15 10:38:08.056517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.426 [2024-07-15 10:38:08.056758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.426 [2024-07-15 10:38:08.056781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.426 [2024-07-15 10:38:08.056795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.426 [2024-07-15 10:38:08.060395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.685 [2024-07-15 10:38:08.069667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.685 [2024-07-15 10:38:08.070085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.685 [2024-07-15 10:38:08.070116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.685 [2024-07-15 10:38:08.070134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.685 [2024-07-15 10:38:08.070371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.685 [2024-07-15 10:38:08.070612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.685 [2024-07-15 10:38:08.070635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.685 [2024-07-15 10:38:08.070650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.685 [2024-07-15 10:38:08.074226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.685 [2024-07-15 10:38:08.083856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.685 [2024-07-15 10:38:08.084273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.685 [2024-07-15 10:38:08.084305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.685 [2024-07-15 10:38:08.084322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.685 [2024-07-15 10:38:08.084566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.685 [2024-07-15 10:38:08.084808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.685 [2024-07-15 10:38:08.084831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.685 [2024-07-15 10:38:08.084846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.685 [2024-07-15 10:38:08.088421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.685 [2024-07-15 10:38:08.097909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.685 [2024-07-15 10:38:08.098491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.685 [2024-07-15 10:38:08.098545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.685 [2024-07-15 10:38:08.098562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.685 [2024-07-15 10:38:08.098799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.685 [2024-07-15 10:38:08.099052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.685 [2024-07-15 10:38:08.099076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.685 [2024-07-15 10:38:08.099091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.685 [2024-07-15 10:38:08.102657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.685 [2024-07-15 10:38:08.111927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.685 [2024-07-15 10:38:08.112438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.685 [2024-07-15 10:38:08.112490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.685 [2024-07-15 10:38:08.112507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.685 [2024-07-15 10:38:08.112743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.685 [2024-07-15 10:38:08.112995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.685 [2024-07-15 10:38:08.113019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.685 [2024-07-15 10:38:08.113034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.685 [2024-07-15 10:38:08.116600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.685 [2024-07-15 10:38:08.125860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.685 [2024-07-15 10:38:08.126269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.685 [2024-07-15 10:38:08.126300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.685 [2024-07-15 10:38:08.126318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.685 [2024-07-15 10:38:08.126555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.685 [2024-07-15 10:38:08.126796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.685 [2024-07-15 10:38:08.126819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.685 [2024-07-15 10:38:08.126839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.685 [2024-07-15 10:38:08.130414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.685 [2024-07-15 10:38:08.139896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.685 [2024-07-15 10:38:08.140323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.685 [2024-07-15 10:38:08.140354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.685 [2024-07-15 10:38:08.140371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.685 [2024-07-15 10:38:08.140608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.685 [2024-07-15 10:38:08.140849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.685 [2024-07-15 10:38:08.140872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.685 [2024-07-15 10:38:08.140899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.685 [2024-07-15 10:38:08.144462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.685 [2024-07-15 10:38:08.153759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.685 [2024-07-15 10:38:08.154195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.685 [2024-07-15 10:38:08.154226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.685 [2024-07-15 10:38:08.154243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.685 [2024-07-15 10:38:08.154480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.685 [2024-07-15 10:38:08.154722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.685 [2024-07-15 10:38:08.154744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.685 [2024-07-15 10:38:08.154759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.685 [2024-07-15 10:38:08.158332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.685 [2024-07-15 10:38:08.167592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.685 [2024-07-15 10:38:08.168020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.685 [2024-07-15 10:38:08.168052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.685 [2024-07-15 10:38:08.168070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.685 [2024-07-15 10:38:08.168307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.685 [2024-07-15 10:38:08.168548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.685 [2024-07-15 10:38:08.168571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.686 [2024-07-15 10:38:08.168586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.686 [2024-07-15 10:38:08.172167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.686 [2024-07-15 10:38:08.181438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.686 [2024-07-15 10:38:08.181888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.686 [2024-07-15 10:38:08.181920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.686 [2024-07-15 10:38:08.181937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.686 [2024-07-15 10:38:08.182174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.686 [2024-07-15 10:38:08.182415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.686 [2024-07-15 10:38:08.182438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.686 [2024-07-15 10:38:08.182452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.686 [2024-07-15 10:38:08.186027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.686 [2024-07-15 10:38:08.195291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.686 [2024-07-15 10:38:08.195782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.686 [2024-07-15 10:38:08.195812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.686 [2024-07-15 10:38:08.195830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.686 [2024-07-15 10:38:08.196076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.686 [2024-07-15 10:38:08.196318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.686 [2024-07-15 10:38:08.196341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.686 [2024-07-15 10:38:08.196356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.686 [2024-07-15 10:38:08.199935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.686 [2024-07-15 10:38:08.209199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.686 [2024-07-15 10:38:08.209604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.686 [2024-07-15 10:38:08.209635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.686 [2024-07-15 10:38:08.209652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.686 [2024-07-15 10:38:08.209900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.686 [2024-07-15 10:38:08.210143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.686 [2024-07-15 10:38:08.210166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.686 [2024-07-15 10:38:08.210180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.686 [2024-07-15 10:38:08.213744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.686 [2024-07-15 10:38:08.223213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.686 [2024-07-15 10:38:08.223646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.686 [2024-07-15 10:38:08.223677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.686 [2024-07-15 10:38:08.223693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.686 [2024-07-15 10:38:08.223942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.686 [2024-07-15 10:38:08.224190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.686 [2024-07-15 10:38:08.224213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.686 [2024-07-15 10:38:08.224228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.686 [2024-07-15 10:38:08.227792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.686 [2024-07-15 10:38:08.237058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.686 [2024-07-15 10:38:08.237479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.686 [2024-07-15 10:38:08.237509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.686 [2024-07-15 10:38:08.237526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.686 [2024-07-15 10:38:08.237763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.686 [2024-07-15 10:38:08.238016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.686 [2024-07-15 10:38:08.238040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.686 [2024-07-15 10:38:08.238056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.686 [2024-07-15 10:38:08.241617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.686 [2024-07-15 10:38:08.251089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.686 [2024-07-15 10:38:08.251467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.686 [2024-07-15 10:38:08.251498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.686 [2024-07-15 10:38:08.251515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.686 [2024-07-15 10:38:08.251752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.686 [2024-07-15 10:38:08.252004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.686 [2024-07-15 10:38:08.252028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.686 [2024-07-15 10:38:08.252043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.686 [2024-07-15 10:38:08.255607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.686 [2024-07-15 10:38:08.265081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.686 [2024-07-15 10:38:08.265514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.686 [2024-07-15 10:38:08.265545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:13.686 [2024-07-15 10:38:08.265562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:13.686 [2024-07-15 10:38:08.265799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:13.686 [2024-07-15 10:38:08.266051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.686 [2024-07-15 10:38:08.266075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.686 [2024-07-15 10:38:08.266089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.686 [2024-07-15 10:38:08.269683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.686 [2024-07-15 10:38:08.278958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.686 [2024-07-15 10:38:08.279382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.686 [2024-07-15 10:38:08.279412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.686 [2024-07-15 10:38:08.279429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.686 [2024-07-15 10:38:08.279666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.686 [2024-07-15 10:38:08.279917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.686 [2024-07-15 10:38:08.279949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.686 [2024-07-15 10:38:08.279964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.686 [2024-07-15 10:38:08.283532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.686 [2024-07-15 10:38:08.292797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.686 [2024-07-15 10:38:08.293204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.686 [2024-07-15 10:38:08.293235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.686 [2024-07-15 10:38:08.293253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.686 [2024-07-15 10:38:08.293489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.686 [2024-07-15 10:38:08.293731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.686 [2024-07-15 10:38:08.293754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.686 [2024-07-15 10:38:08.293768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.686 [2024-07-15 10:38:08.297361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.686 [2024-07-15 10:38:08.306646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.686 [2024-07-15 10:38:08.307086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.686 [2024-07-15 10:38:08.307117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.686 [2024-07-15 10:38:08.307134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.686 [2024-07-15 10:38:08.307370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.686 [2024-07-15 10:38:08.307612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.686 [2024-07-15 10:38:08.307635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.686 [2024-07-15 10:38:08.307650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.687 [2024-07-15 10:38:08.311225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.687 [2024-07-15 10:38:08.320484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.687 [2024-07-15 10:38:08.320890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.687 [2024-07-15 10:38:08.320922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.687 [2024-07-15 10:38:08.320945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.687 [2024-07-15 10:38:08.321183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.687 [2024-07-15 10:38:08.321424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.687 [2024-07-15 10:38:08.321447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.687 [2024-07-15 10:38:08.321461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.687 [2024-07-15 10:38:08.325037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.945 [2024-07-15 10:38:08.334319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.945 [2024-07-15 10:38:08.334807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.945 [2024-07-15 10:38:08.334855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.945 [2024-07-15 10:38:08.334873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.945 [2024-07-15 10:38:08.335123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.945 [2024-07-15 10:38:08.335364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.945 [2024-07-15 10:38:08.335387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.945 [2024-07-15 10:38:08.335402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.945 [2024-07-15 10:38:08.338975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.945 [2024-07-15 10:38:08.348231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.945 [2024-07-15 10:38:08.348658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.945 [2024-07-15 10:38:08.348688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.945 [2024-07-15 10:38:08.348705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.945 [2024-07-15 10:38:08.348953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.945 [2024-07-15 10:38:08.349195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.945 [2024-07-15 10:38:08.349218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.945 [2024-07-15 10:38:08.349233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.945 [2024-07-15 10:38:08.352798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.945 [2024-07-15 10:38:08.362076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.945 [2024-07-15 10:38:08.362507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.945 [2024-07-15 10:38:08.362538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.945 [2024-07-15 10:38:08.362555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.945 [2024-07-15 10:38:08.362792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.945 [2024-07-15 10:38:08.363046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.945 [2024-07-15 10:38:08.363076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.945 [2024-07-15 10:38:08.363092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.945 [2024-07-15 10:38:08.366657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.945 [2024-07-15 10:38:08.375921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.945 [2024-07-15 10:38:08.376343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.945 [2024-07-15 10:38:08.376374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.945 [2024-07-15 10:38:08.376391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.945 [2024-07-15 10:38:08.376628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.945 [2024-07-15 10:38:08.376869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.946 [2024-07-15 10:38:08.376903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.946 [2024-07-15 10:38:08.376918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.946 [2024-07-15 10:38:08.380482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.946 [2024-07-15 10:38:08.389742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.946 [2024-07-15 10:38:08.390151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.946 [2024-07-15 10:38:08.390181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.946 [2024-07-15 10:38:08.390198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.946 [2024-07-15 10:38:08.390435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.946 [2024-07-15 10:38:08.390676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.946 [2024-07-15 10:38:08.390699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.946 [2024-07-15 10:38:08.390714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.946 [2024-07-15 10:38:08.394288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.946 [2024-07-15 10:38:08.403790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.946 [2024-07-15 10:38:08.404236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.946 [2024-07-15 10:38:08.404267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.946 [2024-07-15 10:38:08.404284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.946 [2024-07-15 10:38:08.404521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.946 [2024-07-15 10:38:08.404762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.946 [2024-07-15 10:38:08.404785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.946 [2024-07-15 10:38:08.404800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.946 [2024-07-15 10:38:08.408372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.946 [2024-07-15 10:38:08.417633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.946 [2024-07-15 10:38:08.418045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.946 [2024-07-15 10:38:08.418076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.946 [2024-07-15 10:38:08.418093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.946 [2024-07-15 10:38:08.418330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.946 [2024-07-15 10:38:08.418571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.946 [2024-07-15 10:38:08.418594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.946 [2024-07-15 10:38:08.418609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.946 [2024-07-15 10:38:08.422191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.946 [2024-07-15 10:38:08.431673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.946 [2024-07-15 10:38:08.432106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.946 [2024-07-15 10:38:08.432136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.946 [2024-07-15 10:38:08.432154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.946 [2024-07-15 10:38:08.432391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.946 [2024-07-15 10:38:08.432632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.946 [2024-07-15 10:38:08.432655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.946 [2024-07-15 10:38:08.432670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.946 [2024-07-15 10:38:08.436245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.946 [2024-07-15 10:38:08.445506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.946 [2024-07-15 10:38:08.445930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.946 [2024-07-15 10:38:08.445961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.946 [2024-07-15 10:38:08.445979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.946 [2024-07-15 10:38:08.446216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.946 [2024-07-15 10:38:08.446457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.946 [2024-07-15 10:38:08.446480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.946 [2024-07-15 10:38:08.446494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.946 [2024-07-15 10:38:08.450071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.946 [2024-07-15 10:38:08.459339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.946 [2024-07-15 10:38:08.459763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.946 [2024-07-15 10:38:08.459794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.946 [2024-07-15 10:38:08.459811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.946 [2024-07-15 10:38:08.460064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.946 [2024-07-15 10:38:08.460307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.946 [2024-07-15 10:38:08.460331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.946 [2024-07-15 10:38:08.460345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.946 [2024-07-15 10:38:08.463916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.946 [2024-07-15 10:38:08.473190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.946 [2024-07-15 10:38:08.473611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.946 [2024-07-15 10:38:08.473643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.946 [2024-07-15 10:38:08.473661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.946 [2024-07-15 10:38:08.473908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.946 [2024-07-15 10:38:08.474160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.946 [2024-07-15 10:38:08.474183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.946 [2024-07-15 10:38:08.474199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.946 [2024-07-15 10:38:08.477764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.946 [2024-07-15 10:38:08.487060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.946 [2024-07-15 10:38:08.487538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.946 [2024-07-15 10:38:08.487568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.946 [2024-07-15 10:38:08.487585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.946 [2024-07-15 10:38:08.487821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.946 [2024-07-15 10:38:08.488072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.946 [2024-07-15 10:38:08.488096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.946 [2024-07-15 10:38:08.488111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.946 [2024-07-15 10:38:08.491678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.946 [2024-07-15 10:38:08.500963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.946 [2024-07-15 10:38:08.501516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.946 [2024-07-15 10:38:08.501574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.946 [2024-07-15 10:38:08.501591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.946 [2024-07-15 10:38:08.501828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.946 [2024-07-15 10:38:08.502079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.946 [2024-07-15 10:38:08.502103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.946 [2024-07-15 10:38:08.502125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.946 [2024-07-15 10:38:08.505689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.946 [2024-07-15 10:38:08.514978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.946 [2024-07-15 10:38:08.515487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.946 [2024-07-15 10:38:08.515538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.946 [2024-07-15 10:38:08.515555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.946 [2024-07-15 10:38:08.515792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.946 [2024-07-15 10:38:08.516043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.946 [2024-07-15 10:38:08.516067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.946 [2024-07-15 10:38:08.516082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.946 [2024-07-15 10:38:08.519648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.946 [2024-07-15 10:38:08.528918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.946 [2024-07-15 10:38:08.529341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.946 [2024-07-15 10:38:08.529371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.946 [2024-07-15 10:38:08.529388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.946 [2024-07-15 10:38:08.529625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.946 [2024-07-15 10:38:08.529866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.947 [2024-07-15 10:38:08.529903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.947 [2024-07-15 10:38:08.529918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.947 [2024-07-15 10:38:08.533479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.947 [2024-07-15 10:38:08.542750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.947 [2024-07-15 10:38:08.543161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.947 [2024-07-15 10:38:08.543192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.947 [2024-07-15 10:38:08.543209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.947 [2024-07-15 10:38:08.543446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.947 [2024-07-15 10:38:08.543687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.947 [2024-07-15 10:38:08.543709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.947 [2024-07-15 10:38:08.543724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.947 [2024-07-15 10:38:08.547298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.947 [2024-07-15 10:38:08.556769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.947 [2024-07-15 10:38:08.557226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.947 [2024-07-15 10:38:08.557257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.947 [2024-07-15 10:38:08.557275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.947 [2024-07-15 10:38:08.557511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.947 [2024-07-15 10:38:08.557753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.947 [2024-07-15 10:38:08.557776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.947 [2024-07-15 10:38:08.557790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.947 [2024-07-15 10:38:08.561368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.947 [2024-07-15 10:38:08.570632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.947 [2024-07-15 10:38:08.571050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.947 [2024-07-15 10:38:08.571081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.947 [2024-07-15 10:38:08.571098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.947 [2024-07-15 10:38:08.571336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.947 [2024-07-15 10:38:08.571577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.947 [2024-07-15 10:38:08.571599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.947 [2024-07-15 10:38:08.571614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.947 [2024-07-15 10:38:08.575189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:13.947 [2024-07-15 10:38:08.584655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.947 [2024-07-15 10:38:08.585096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:13.947 [2024-07-15 10:38:08.585127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:13.947 [2024-07-15 10:38:08.585144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:13.947 [2024-07-15 10:38:08.585381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:13.947 [2024-07-15 10:38:08.585622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:13.947 [2024-07-15 10:38:08.585645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:13.947 [2024-07-15 10:38:08.585660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.947 [2024-07-15 10:38:08.589233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.207 [2024-07-15 10:38:08.598522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.207 [2024-07-15 10:38:08.598937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.207 [2024-07-15 10:38:08.598968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.207 [2024-07-15 10:38:08.598985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.207 [2024-07-15 10:38:08.599222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.207 [2024-07-15 10:38:08.599469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.207 [2024-07-15 10:38:08.599492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.207 [2024-07-15 10:38:08.599507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.207 [2024-07-15 10:38:08.603083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.207 [2024-07-15 10:38:08.612549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.207 [2024-07-15 10:38:08.612983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.207 [2024-07-15 10:38:08.613015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.207 [2024-07-15 10:38:08.613032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.207 [2024-07-15 10:38:08.613269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.207 [2024-07-15 10:38:08.613510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.207 [2024-07-15 10:38:08.613533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.207 [2024-07-15 10:38:08.613547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.207 [2024-07-15 10:38:08.617121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.207 [2024-07-15 10:38:08.626578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.207 [2024-07-15 10:38:08.627005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.207 [2024-07-15 10:38:08.627036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.207 [2024-07-15 10:38:08.627054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.207 [2024-07-15 10:38:08.627291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.207 [2024-07-15 10:38:08.627532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.207 [2024-07-15 10:38:08.627555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.207 [2024-07-15 10:38:08.627570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.207 [2024-07-15 10:38:08.631144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.207 [2024-07-15 10:38:08.640415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.207 [2024-07-15 10:38:08.640818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.207 [2024-07-15 10:38:08.640849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.207 [2024-07-15 10:38:08.640866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.207 [2024-07-15 10:38:08.641114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.207 [2024-07-15 10:38:08.641356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.207 [2024-07-15 10:38:08.641379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.207 [2024-07-15 10:38:08.641394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.207 [2024-07-15 10:38:08.644971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.207 [2024-07-15 10:38:08.654275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.207 [2024-07-15 10:38:08.654676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.207 [2024-07-15 10:38:08.654706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.207 [2024-07-15 10:38:08.654723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.207 [2024-07-15 10:38:08.654971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.207 [2024-07-15 10:38:08.655213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.207 [2024-07-15 10:38:08.655236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.207 [2024-07-15 10:38:08.655251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.207 [2024-07-15 10:38:08.658843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.207 [2024-07-15 10:38:08.668118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.207 [2024-07-15 10:38:08.668592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.207 [2024-07-15 10:38:08.668640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.207 [2024-07-15 10:38:08.668657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.207 [2024-07-15 10:38:08.668905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.207 [2024-07-15 10:38:08.669148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.207 [2024-07-15 10:38:08.669171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.207 [2024-07-15 10:38:08.669186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.207 [2024-07-15 10:38:08.672758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.207 [2024-07-15 10:38:08.682038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.207 [2024-07-15 10:38:08.682519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.207 [2024-07-15 10:38:08.682568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.207 [2024-07-15 10:38:08.682585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.207 [2024-07-15 10:38:08.682822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.207 [2024-07-15 10:38:08.683073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.207 [2024-07-15 10:38:08.683097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.207 [2024-07-15 10:38:08.683112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.207 [2024-07-15 10:38:08.686770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.207 [2024-07-15 10:38:08.696059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.208 [2024-07-15 10:38:08.696527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.208 [2024-07-15 10:38:08.696558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.208 [2024-07-15 10:38:08.696581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.208 [2024-07-15 10:38:08.696819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.208 [2024-07-15 10:38:08.697070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.208 [2024-07-15 10:38:08.697094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.208 [2024-07-15 10:38:08.697109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.208 [2024-07-15 10:38:08.700692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.208 [2024-07-15 10:38:08.709963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.208 [2024-07-15 10:38:08.710402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.208 [2024-07-15 10:38:08.710433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.208 [2024-07-15 10:38:08.710450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.208 [2024-07-15 10:38:08.710687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.208 [2024-07-15 10:38:08.710940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.208 [2024-07-15 10:38:08.710964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.208 [2024-07-15 10:38:08.710979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.208 [2024-07-15 10:38:08.714543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.208 [2024-07-15 10:38:08.723845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.208 [2024-07-15 10:38:08.724293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.208 [2024-07-15 10:38:08.724325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.208 [2024-07-15 10:38:08.724343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.208 [2024-07-15 10:38:08.724580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.208 [2024-07-15 10:38:08.724822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.208 [2024-07-15 10:38:08.724845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.208 [2024-07-15 10:38:08.724860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.208 [2024-07-15 10:38:08.728434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.208 [2024-07-15 10:38:08.737697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.208 [2024-07-15 10:38:08.738114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.208 [2024-07-15 10:38:08.738145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.208 [2024-07-15 10:38:08.738163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.208 [2024-07-15 10:38:08.738400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.208 [2024-07-15 10:38:08.738642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.208 [2024-07-15 10:38:08.738670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.208 [2024-07-15 10:38:08.738686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.208 [2024-07-15 10:38:08.742263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.208 [2024-07-15 10:38:08.751729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.208 [2024-07-15 10:38:08.752136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.208 [2024-07-15 10:38:08.752167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.208 [2024-07-15 10:38:08.752185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.208 [2024-07-15 10:38:08.752422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.208 [2024-07-15 10:38:08.752663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.208 [2024-07-15 10:38:08.752686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.208 [2024-07-15 10:38:08.752700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.208 [2024-07-15 10:38:08.756275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.208 [2024-07-15 10:38:08.765738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.208 [2024-07-15 10:38:08.766149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.208 [2024-07-15 10:38:08.766180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.208 [2024-07-15 10:38:08.766196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.208 [2024-07-15 10:38:08.766433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.208 [2024-07-15 10:38:08.766674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.208 [2024-07-15 10:38:08.766697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.208 [2024-07-15 10:38:08.766711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.208 [2024-07-15 10:38:08.770285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.208 [2024-07-15 10:38:08.779744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.208 [2024-07-15 10:38:08.780150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.208 [2024-07-15 10:38:08.780181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.208 [2024-07-15 10:38:08.780198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.208 [2024-07-15 10:38:08.780435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.208 [2024-07-15 10:38:08.780676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.208 [2024-07-15 10:38:08.780700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.208 [2024-07-15 10:38:08.780714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.208 [2024-07-15 10:38:08.784283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.208 [2024-07-15 10:38:08.793754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.208 [2024-07-15 10:38:08.794162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.208 [2024-07-15 10:38:08.794193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.208 [2024-07-15 10:38:08.794210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.208 [2024-07-15 10:38:08.794446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.208 [2024-07-15 10:38:08.794687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.208 [2024-07-15 10:38:08.794710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.208 [2024-07-15 10:38:08.794724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.208 [2024-07-15 10:38:08.798300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.208 [2024-07-15 10:38:08.807777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.208 [2024-07-15 10:38:08.808193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.208 [2024-07-15 10:38:08.808223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.208 [2024-07-15 10:38:08.808241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.208 [2024-07-15 10:38:08.808478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.208 [2024-07-15 10:38:08.808719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.208 [2024-07-15 10:38:08.808742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.208 [2024-07-15 10:38:08.808757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.208 [2024-07-15 10:38:08.812329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.208 [2024-07-15 10:38:08.821807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.208 [2024-07-15 10:38:08.822196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.208 [2024-07-15 10:38:08.822227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.208 [2024-07-15 10:38:08.822244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.208 [2024-07-15 10:38:08.822481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.208 [2024-07-15 10:38:08.822723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.208 [2024-07-15 10:38:08.822746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.208 [2024-07-15 10:38:08.822760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.208 [2024-07-15 10:38:08.826355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.208 [2024-07-15 10:38:08.835828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.208 [2024-07-15 10:38:08.836258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.208 [2024-07-15 10:38:08.836289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.208 [2024-07-15 10:38:08.836306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.208 [2024-07-15 10:38:08.836548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.208 [2024-07-15 10:38:08.836790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.208 [2024-07-15 10:38:08.836813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.208 [2024-07-15 10:38:08.836828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.208 [2024-07-15 10:38:08.840399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.209 [2024-07-15 10:38:08.849685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.209 [2024-07-15 10:38:08.850103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.209 [2024-07-15 10:38:08.850134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.209 [2024-07-15 10:38:08.850151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.209 [2024-07-15 10:38:08.850388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.209 [2024-07-15 10:38:08.850629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.209 [2024-07-15 10:38:08.850653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.209 [2024-07-15 10:38:08.850668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.209 [2024-07-15 10:38:08.854248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.466 [2024-07-15 10:38:08.863721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.466 [2024-07-15 10:38:08.864157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.466 [2024-07-15 10:38:08.864187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.466 [2024-07-15 10:38:08.864205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.466 [2024-07-15 10:38:08.864442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.466 [2024-07-15 10:38:08.864683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.466 [2024-07-15 10:38:08.864706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.466 [2024-07-15 10:38:08.864721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.466 [2024-07-15 10:38:08.868299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.466 [2024-07-15 10:38:08.877566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.466 [2024-07-15 10:38:08.877997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.466 [2024-07-15 10:38:08.878028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.466 [2024-07-15 10:38:08.878046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.466 [2024-07-15 10:38:08.878283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.467 [2024-07-15 10:38:08.878524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.467 [2024-07-15 10:38:08.878548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.467 [2024-07-15 10:38:08.878571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.467 [2024-07-15 10:38:08.882147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.467 [2024-07-15 10:38:08.891410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.467 [2024-07-15 10:38:08.891822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.467 [2024-07-15 10:38:08.891853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.467 [2024-07-15 10:38:08.891870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.467 [2024-07-15 10:38:08.892118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.467 [2024-07-15 10:38:08.892359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.467 [2024-07-15 10:38:08.892383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.467 [2024-07-15 10:38:08.892398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.467 [2024-07-15 10:38:08.895965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.467 [2024-07-15 10:38:08.905447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.467 [2024-07-15 10:38:08.905850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.467 [2024-07-15 10:38:08.905888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.467 [2024-07-15 10:38:08.905908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.467 [2024-07-15 10:38:08.906145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.467 [2024-07-15 10:38:08.906386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.467 [2024-07-15 10:38:08.906409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.467 [2024-07-15 10:38:08.906423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.467 [2024-07-15 10:38:08.909999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.467 [2024-07-15 10:38:08.919470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.467 [2024-07-15 10:38:08.919909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.467 [2024-07-15 10:38:08.919940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.467 [2024-07-15 10:38:08.919957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.467 [2024-07-15 10:38:08.920194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.467 [2024-07-15 10:38:08.920435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.467 [2024-07-15 10:38:08.920458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.467 [2024-07-15 10:38:08.920473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.467 [2024-07-15 10:38:08.924045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.467 [2024-07-15 10:38:08.933322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.467 [2024-07-15 10:38:08.933730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.467 [2024-07-15 10:38:08.933760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.467 [2024-07-15 10:38:08.933778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.467 [2024-07-15 10:38:08.934025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.467 [2024-07-15 10:38:08.934267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.467 [2024-07-15 10:38:08.934290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.467 [2024-07-15 10:38:08.934305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.467 [2024-07-15 10:38:08.937871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.467 [2024-07-15 10:38:08.947354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.467 [2024-07-15 10:38:08.947755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.467 [2024-07-15 10:38:08.947786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.467 [2024-07-15 10:38:08.947803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.467 [2024-07-15 10:38:08.948050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.467 [2024-07-15 10:38:08.948293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.467 [2024-07-15 10:38:08.948315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.467 [2024-07-15 10:38:08.948330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.467 [2024-07-15 10:38:08.951898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.467 [2024-07-15 10:38:08.961367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.467 [2024-07-15 10:38:08.961789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.467 [2024-07-15 10:38:08.961820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.467 [2024-07-15 10:38:08.961837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.467 [2024-07-15 10:38:08.962082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.467 [2024-07-15 10:38:08.962324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.467 [2024-07-15 10:38:08.962347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.467 [2024-07-15 10:38:08.962362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.467 [2024-07-15 10:38:08.965932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.467 [2024-07-15 10:38:08.975393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.467 [2024-07-15 10:38:08.975819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.467 [2024-07-15 10:38:08.975850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.467 [2024-07-15 10:38:08.975866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.467 [2024-07-15 10:38:08.976110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.467 [2024-07-15 10:38:08.976358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.467 [2024-07-15 10:38:08.976381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.467 [2024-07-15 10:38:08.976396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.467 [2024-07-15 10:38:08.979968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.467 [2024-07-15 10:38:08.989229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.467 [2024-07-15 10:38:08.989626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.467 [2024-07-15 10:38:08.989656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:14.467 [2024-07-15 10:38:08.989673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:14.467 [2024-07-15 10:38:08.989920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:14.467 [2024-07-15 10:38:08.990162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.467 [2024-07-15 10:38:08.990185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.467 [2024-07-15 10:38:08.990200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.467 [2024-07-15 10:38:08.993764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.467 [2024-07-15 10:38:09.003243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.467 [2024-07-15 10:38:09.003650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.467 [2024-07-15 10:38:09.003681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.467 [2024-07-15 10:38:09.003698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.467 [2024-07-15 10:38:09.003947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.467 [2024-07-15 10:38:09.004189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.467 [2024-07-15 10:38:09.004213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.467 [2024-07-15 10:38:09.004227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.467 [2024-07-15 10:38:09.007790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.467 [2024-07-15 10:38:09.017259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.467 [2024-07-15 10:38:09.017661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.467 [2024-07-15 10:38:09.017692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.467 [2024-07-15 10:38:09.017709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.467 [2024-07-15 10:38:09.017958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.467 [2024-07-15 10:38:09.018200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.467 [2024-07-15 10:38:09.018223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.467 [2024-07-15 10:38:09.018238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.467 [2024-07-15 10:38:09.021808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.467 [2024-07-15 10:38:09.031292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.467 [2024-07-15 10:38:09.031690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.467 [2024-07-15 10:38:09.031720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.467 [2024-07-15 10:38:09.031738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.467 [2024-07-15 10:38:09.031987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.468 [2024-07-15 10:38:09.032228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.468 [2024-07-15 10:38:09.032251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.468 [2024-07-15 10:38:09.032266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.468 [2024-07-15 10:38:09.035828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.468 [2024-07-15 10:38:09.045316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.468 [2024-07-15 10:38:09.045743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.468 [2024-07-15 10:38:09.045774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.468 [2024-07-15 10:38:09.045791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.468 [2024-07-15 10:38:09.046039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.468 [2024-07-15 10:38:09.046281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.468 [2024-07-15 10:38:09.046304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.468 [2024-07-15 10:38:09.046319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.468 [2024-07-15 10:38:09.049884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.468 [2024-07-15 10:38:09.059343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.468 [2024-07-15 10:38:09.059768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.468 [2024-07-15 10:38:09.059799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.468 [2024-07-15 10:38:09.059816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.468 [2024-07-15 10:38:09.060063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.468 [2024-07-15 10:38:09.060305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.468 [2024-07-15 10:38:09.060328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.468 [2024-07-15 10:38:09.060343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.468 [2024-07-15 10:38:09.063909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.468 [2024-07-15 10:38:09.073373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.468 [2024-07-15 10:38:09.073810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.468 [2024-07-15 10:38:09.073841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.468 [2024-07-15 10:38:09.073864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.468 [2024-07-15 10:38:09.074111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.468 [2024-07-15 10:38:09.074354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.468 [2024-07-15 10:38:09.074377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.468 [2024-07-15 10:38:09.074392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.468 [2024-07-15 10:38:09.077963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.468 [2024-07-15 10:38:09.087226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.468 [2024-07-15 10:38:09.087624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.468 [2024-07-15 10:38:09.087654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.468 [2024-07-15 10:38:09.087671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.468 [2024-07-15 10:38:09.087917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.468 [2024-07-15 10:38:09.088159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.468 [2024-07-15 10:38:09.088183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.468 [2024-07-15 10:38:09.088198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.468 [2024-07-15 10:38:09.091761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.468 [2024-07-15 10:38:09.101241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.468 [2024-07-15 10:38:09.101668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.468 [2024-07-15 10:38:09.101698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.468 [2024-07-15 10:38:09.101716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.468 [2024-07-15 10:38:09.101962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.468 [2024-07-15 10:38:09.102204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.468 [2024-07-15 10:38:09.102228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.468 [2024-07-15 10:38:09.102242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.468 [2024-07-15 10:38:09.105802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.468 [2024-07-15 10:38:09.115138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.468 [2024-07-15 10:38:09.115539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.468 [2024-07-15 10:38:09.115570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.468 [2024-07-15 10:38:09.115587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.468 [2024-07-15 10:38:09.115824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.727 [2024-07-15 10:38:09.116074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.727 [2024-07-15 10:38:09.116104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.727 [2024-07-15 10:38:09.116120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.727 [2024-07-15 10:38:09.119686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.727 [2024-07-15 10:38:09.129156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.727 [2024-07-15 10:38:09.129556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.727 [2024-07-15 10:38:09.129587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.727 [2024-07-15 10:38:09.129604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.727 [2024-07-15 10:38:09.129841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.727 [2024-07-15 10:38:09.130092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.727 [2024-07-15 10:38:09.130116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.727 [2024-07-15 10:38:09.130130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.727 [2024-07-15 10:38:09.133688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.727 [2024-07-15 10:38:09.143173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.727 [2024-07-15 10:38:09.143625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.727 [2024-07-15 10:38:09.143656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.727 [2024-07-15 10:38:09.143674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.727 [2024-07-15 10:38:09.143922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.727 [2024-07-15 10:38:09.144164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.727 [2024-07-15 10:38:09.144187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.727 [2024-07-15 10:38:09.144202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.727 [2024-07-15 10:38:09.147761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.727 [2024-07-15 10:38:09.157033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.727 [2024-07-15 10:38:09.157473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.727 [2024-07-15 10:38:09.157503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.727 [2024-07-15 10:38:09.157521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.727 [2024-07-15 10:38:09.157758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.727 [2024-07-15 10:38:09.158007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.727 [2024-07-15 10:38:09.158031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.727 [2024-07-15 10:38:09.158046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.727 [2024-07-15 10:38:09.161609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.727 [2024-07-15 10:38:09.170870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.727 [2024-07-15 10:38:09.171303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.727 [2024-07-15 10:38:09.171334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.727 [2024-07-15 10:38:09.171351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.727 [2024-07-15 10:38:09.171588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.727 [2024-07-15 10:38:09.171829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.727 [2024-07-15 10:38:09.171852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.727 [2024-07-15 10:38:09.171867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.727 [2024-07-15 10:38:09.175448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.727 [2024-07-15 10:38:09.184709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.727 [2024-07-15 10:38:09.185094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.727 [2024-07-15 10:38:09.185125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.727 [2024-07-15 10:38:09.185142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.727 [2024-07-15 10:38:09.185379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.727 [2024-07-15 10:38:09.185621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.727 [2024-07-15 10:38:09.185644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.727 [2024-07-15 10:38:09.185658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.727 [2024-07-15 10:38:09.189231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.727 [2024-07-15 10:38:09.198702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.727 [2024-07-15 10:38:09.199126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.727 [2024-07-15 10:38:09.199157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.727 [2024-07-15 10:38:09.199174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.727 [2024-07-15 10:38:09.199411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.727 [2024-07-15 10:38:09.199652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.727 [2024-07-15 10:38:09.199675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.727 [2024-07-15 10:38:09.199689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.727 [2024-07-15 10:38:09.203261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.727 [2024-07-15 10:38:09.212723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.727 [2024-07-15 10:38:09.213155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.727 [2024-07-15 10:38:09.213186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.727 [2024-07-15 10:38:09.213203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.727 [2024-07-15 10:38:09.213445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.727 [2024-07-15 10:38:09.213686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.727 [2024-07-15 10:38:09.213710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.727 [2024-07-15 10:38:09.213724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.727 [2024-07-15 10:38:09.217320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.727 [2024-07-15 10:38:09.226580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.727 [2024-07-15 10:38:09.227015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.727 [2024-07-15 10:38:09.227047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.727 [2024-07-15 10:38:09.227065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.727 [2024-07-15 10:38:09.227303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.727 [2024-07-15 10:38:09.227543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.727 [2024-07-15 10:38:09.227566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.727 [2024-07-15 10:38:09.227581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.727 [2024-07-15 10:38:09.231152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.727 [2024-07-15 10:38:09.240414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.727 [2024-07-15 10:38:09.240839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.727 [2024-07-15 10:38:09.240869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.727 [2024-07-15 10:38:09.240897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.727 [2024-07-15 10:38:09.241135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.727 [2024-07-15 10:38:09.241377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.727 [2024-07-15 10:38:09.241400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.727 [2024-07-15 10:38:09.241414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.727 [2024-07-15 10:38:09.244980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.727 [2024-07-15 10:38:09.254267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.728 [2024-07-15 10:38:09.254709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.728 [2024-07-15 10:38:09.254739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.728 [2024-07-15 10:38:09.254756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.728 [2024-07-15 10:38:09.255002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.728 [2024-07-15 10:38:09.255244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.728 [2024-07-15 10:38:09.255267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.728 [2024-07-15 10:38:09.255288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.728 [2024-07-15 10:38:09.258853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.728 [2024-07-15 10:38:09.268109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.728 [2024-07-15 10:38:09.268514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.728 [2024-07-15 10:38:09.268544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.728 [2024-07-15 10:38:09.268561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.728 [2024-07-15 10:38:09.268798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.728 [2024-07-15 10:38:09.269049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.728 [2024-07-15 10:38:09.269074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.728 [2024-07-15 10:38:09.269088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.728 [2024-07-15 10:38:09.272650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.728 [2024-07-15 10:38:09.282113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.728 [2024-07-15 10:38:09.282517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.728 [2024-07-15 10:38:09.282548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.728 [2024-07-15 10:38:09.282565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.728 [2024-07-15 10:38:09.282801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.728 [2024-07-15 10:38:09.283053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.728 [2024-07-15 10:38:09.283077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.728 [2024-07-15 10:38:09.283092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.728 [2024-07-15 10:38:09.286656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.728 [2024-07-15 10:38:09.296124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.728 [2024-07-15 10:38:09.296534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.728 [2024-07-15 10:38:09.296565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.728 [2024-07-15 10:38:09.296582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.728 [2024-07-15 10:38:09.296820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.728 [2024-07-15 10:38:09.297080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.728 [2024-07-15 10:38:09.297105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.728 [2024-07-15 10:38:09.297120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.728 [2024-07-15 10:38:09.300691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.728 [2024-07-15 10:38:09.309949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.728 [2024-07-15 10:38:09.310360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.728 [2024-07-15 10:38:09.310391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.728 [2024-07-15 10:38:09.310408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.728 [2024-07-15 10:38:09.310646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.728 [2024-07-15 10:38:09.310897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.728 [2024-07-15 10:38:09.310920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.728 [2024-07-15 10:38:09.310935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.728 [2024-07-15 10:38:09.314498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.728 [2024-07-15 10:38:09.323967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.728 [2024-07-15 10:38:09.324389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.728 [2024-07-15 10:38:09.324419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.728 [2024-07-15 10:38:09.324436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.728 [2024-07-15 10:38:09.324673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.728 [2024-07-15 10:38:09.324925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.728 [2024-07-15 10:38:09.324949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.728 [2024-07-15 10:38:09.324964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.728 [2024-07-15 10:38:09.328523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.728 [2024-07-15 10:38:09.337991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.728 [2024-07-15 10:38:09.338430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.728 [2024-07-15 10:38:09.338460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.728 [2024-07-15 10:38:09.338477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.728 [2024-07-15 10:38:09.338714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.728 [2024-07-15 10:38:09.338967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.728 [2024-07-15 10:38:09.338991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.728 [2024-07-15 10:38:09.339006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.728 [2024-07-15 10:38:09.342566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.728 [2024-07-15 10:38:09.352035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.728 [2024-07-15 10:38:09.352461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.728 [2024-07-15 10:38:09.352492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.728 [2024-07-15 10:38:09.352509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.728 [2024-07-15 10:38:09.352746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.728 [2024-07-15 10:38:09.353004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.728 [2024-07-15 10:38:09.353028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.728 [2024-07-15 10:38:09.353043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.728 [2024-07-15 10:38:09.356608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.728 [2024-07-15 10:38:09.365864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.728 [2024-07-15 10:38:09.366268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.728 [2024-07-15 10:38:09.366299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.728 [2024-07-15 10:38:09.366316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.728 [2024-07-15 10:38:09.366554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.728 [2024-07-15 10:38:09.366795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.728 [2024-07-15 10:38:09.366817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.728 [2024-07-15 10:38:09.366832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.728 [2024-07-15 10:38:09.370402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.987 [2024-07-15 10:38:09.379867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.987 [2024-07-15 10:38:09.380296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.987 [2024-07-15 10:38:09.380327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.987 [2024-07-15 10:38:09.380344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.987 [2024-07-15 10:38:09.380581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.987 [2024-07-15 10:38:09.380822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.987 [2024-07-15 10:38:09.380844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.987 [2024-07-15 10:38:09.380859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.987 [2024-07-15 10:38:09.384429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.987 [2024-07-15 10:38:09.393895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.987 [2024-07-15 10:38:09.394297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.987 [2024-07-15 10:38:09.394328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.987 [2024-07-15 10:38:09.394345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.987 [2024-07-15 10:38:09.394582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.987 [2024-07-15 10:38:09.394823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.987 [2024-07-15 10:38:09.394846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.987 [2024-07-15 10:38:09.394861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.987 [2024-07-15 10:38:09.398442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.987 [2024-07-15 10:38:09.407913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.987 [2024-07-15 10:38:09.408313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.987 [2024-07-15 10:38:09.408343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.987 [2024-07-15 10:38:09.408360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.987 [2024-07-15 10:38:09.408597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.987 [2024-07-15 10:38:09.408838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.987 [2024-07-15 10:38:09.408861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.987 [2024-07-15 10:38:09.408885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.987 [2024-07-15 10:38:09.412450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.987 [2024-07-15 10:38:09.421924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.987 [2024-07-15 10:38:09.422328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.987 [2024-07-15 10:38:09.422359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.987 [2024-07-15 10:38:09.422376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.987 [2024-07-15 10:38:09.422614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.987 [2024-07-15 10:38:09.422854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.987 [2024-07-15 10:38:09.422886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.987 [2024-07-15 10:38:09.422903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.987 [2024-07-15 10:38:09.426466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.987 [2024-07-15 10:38:09.435939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.987 [2024-07-15 10:38:09.436365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.987 [2024-07-15 10:38:09.436395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.987 [2024-07-15 10:38:09.436412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.987 [2024-07-15 10:38:09.436649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.987 [2024-07-15 10:38:09.436899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.987 [2024-07-15 10:38:09.436923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.987 [2024-07-15 10:38:09.436937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.987 [2024-07-15 10:38:09.440500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.987 [2024-07-15 10:38:09.449760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.987 [2024-07-15 10:38:09.450183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.987 [2024-07-15 10:38:09.450214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.987 [2024-07-15 10:38:09.450237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.987 [2024-07-15 10:38:09.450474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.987 [2024-07-15 10:38:09.450716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.987 [2024-07-15 10:38:09.450739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.987 [2024-07-15 10:38:09.450754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.987 [2024-07-15 10:38:09.454323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.987 [2024-07-15 10:38:09.463784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.987 [2024-07-15 10:38:09.464196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.987 [2024-07-15 10:38:09.464226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.987 [2024-07-15 10:38:09.464243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.987 [2024-07-15 10:38:09.464480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.987 [2024-07-15 10:38:09.464720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.987 [2024-07-15 10:38:09.464743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.987 [2024-07-15 10:38:09.464758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.987 [2024-07-15 10:38:09.468328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.987 [2024-07-15 10:38:09.477788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.987 [2024-07-15 10:38:09.478201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.987 [2024-07-15 10:38:09.478232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.987 [2024-07-15 10:38:09.478249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.987 [2024-07-15 10:38:09.478486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.987 [2024-07-15 10:38:09.478727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.987 [2024-07-15 10:38:09.478749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.988 [2024-07-15 10:38:09.478764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.988 [2024-07-15 10:38:09.482338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.988 [2024-07-15 10:38:09.491818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.988 [2024-07-15 10:38:09.492264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.988 [2024-07-15 10:38:09.492294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.988 [2024-07-15 10:38:09.492311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.988 [2024-07-15 10:38:09.492548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.988 [2024-07-15 10:38:09.492790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.988 [2024-07-15 10:38:09.492819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.988 [2024-07-15 10:38:09.492835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.988 [2024-07-15 10:38:09.496405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.988 [2024-07-15 10:38:09.505667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.988 [2024-07-15 10:38:09.506079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.988 [2024-07-15 10:38:09.506110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.988 [2024-07-15 10:38:09.506127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.988 [2024-07-15 10:38:09.506364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.988 [2024-07-15 10:38:09.506605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.988 [2024-07-15 10:38:09.506628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.988 [2024-07-15 10:38:09.506642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.988 [2024-07-15 10:38:09.510210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.988 [2024-07-15 10:38:09.519676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.988 [2024-07-15 10:38:09.520117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.988 [2024-07-15 10:38:09.520148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.988 [2024-07-15 10:38:09.520165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.988 [2024-07-15 10:38:09.520401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.988 [2024-07-15 10:38:09.520642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.988 [2024-07-15 10:38:09.520665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.988 [2024-07-15 10:38:09.520680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.988 [2024-07-15 10:38:09.524246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.988 [2024-07-15 10:38:09.533504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.988 [2024-07-15 10:38:09.533907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.988 [2024-07-15 10:38:09.533938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.988 [2024-07-15 10:38:09.533955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.988 [2024-07-15 10:38:09.534192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.988 [2024-07-15 10:38:09.534433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.988 [2024-07-15 10:38:09.534455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.988 [2024-07-15 10:38:09.534470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.988 [2024-07-15 10:38:09.538043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.988 [2024-07-15 10:38:09.547515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.988 [2024-07-15 10:38:09.547967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.988 [2024-07-15 10:38:09.547998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.988 [2024-07-15 10:38:09.548015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.988 [2024-07-15 10:38:09.548252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.988 [2024-07-15 10:38:09.548493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.988 [2024-07-15 10:38:09.548515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.988 [2024-07-15 10:38:09.548530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.988 [2024-07-15 10:38:09.552102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.988 [2024-07-15 10:38:09.561355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.988 [2024-07-15 10:38:09.561794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.988 [2024-07-15 10:38:09.561825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.988 [2024-07-15 10:38:09.561841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.988 [2024-07-15 10:38:09.562088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.988 [2024-07-15 10:38:09.562330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.988 [2024-07-15 10:38:09.562353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.988 [2024-07-15 10:38:09.562368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.988 [2024-07-15 10:38:09.565934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.988 [2024-07-15 10:38:09.575190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.988 [2024-07-15 10:38:09.575617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.988 [2024-07-15 10:38:09.575647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.988 [2024-07-15 10:38:09.575664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.988 [2024-07-15 10:38:09.575911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.988 [2024-07-15 10:38:09.576153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.988 [2024-07-15 10:38:09.576176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.988 [2024-07-15 10:38:09.576191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.988 [2024-07-15 10:38:09.579750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.988 [2024-07-15 10:38:09.589219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.988 [2024-07-15 10:38:09.589623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.988 [2024-07-15 10:38:09.589654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.988 [2024-07-15 10:38:09.589671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.988 [2024-07-15 10:38:09.589930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.988 [2024-07-15 10:38:09.590171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.988 [2024-07-15 10:38:09.590195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.988 [2024-07-15 10:38:09.590209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.988 [2024-07-15 10:38:09.593768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.988 [2024-07-15 10:38:09.603246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.988 [2024-07-15 10:38:09.603655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.988 [2024-07-15 10:38:09.603685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.988 [2024-07-15 10:38:09.603702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.988 [2024-07-15 10:38:09.603949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.988 [2024-07-15 10:38:09.604191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.988 [2024-07-15 10:38:09.604214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.988 [2024-07-15 10:38:09.604229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.988 [2024-07-15 10:38:09.607789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.988 [2024-07-15 10:38:09.617263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.988 [2024-07-15 10:38:09.617693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.988 [2024-07-15 10:38:09.617724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.988 [2024-07-15 10:38:09.617740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.988 [2024-07-15 10:38:09.617988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.988 [2024-07-15 10:38:09.618230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.988 [2024-07-15 10:38:09.618253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.988 [2024-07-15 10:38:09.618267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.988 [2024-07-15 10:38:09.621828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.988 [2024-07-15 10:38:09.631088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.988 [2024-07-15 10:38:09.631503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.989 [2024-07-15 10:38:09.631534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:14.989 [2024-07-15 10:38:09.631551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:14.989 [2024-07-15 10:38:09.631788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:14.989 [2024-07-15 10:38:09.632040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.989 [2024-07-15 10:38:09.632064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.989 [2024-07-15 10:38:09.632084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.247 [2024-07-15 10:38:09.635647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.247 [2024-07-15 10:38:09.645118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.247 [2024-07-15 10:38:09.645520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.247 [2024-07-15 10:38:09.645551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.247 [2024-07-15 10:38:09.645568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.247 [2024-07-15 10:38:09.645805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.247 [2024-07-15 10:38:09.646056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.247 [2024-07-15 10:38:09.646080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.247 [2024-07-15 10:38:09.646095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.247 [2024-07-15 10:38:09.649658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.247 [2024-07-15 10:38:09.659141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.247 [2024-07-15 10:38:09.659566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.247 [2024-07-15 10:38:09.659597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.247 [2024-07-15 10:38:09.659614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.247 [2024-07-15 10:38:09.659851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.247 [2024-07-15 10:38:09.660103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.247 [2024-07-15 10:38:09.660127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.247 [2024-07-15 10:38:09.660142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.247 [2024-07-15 10:38:09.663700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.247 [2024-07-15 10:38:09.673166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.247 [2024-07-15 10:38:09.673575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.247 [2024-07-15 10:38:09.673606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.247 [2024-07-15 10:38:09.673623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.247 [2024-07-15 10:38:09.673860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.247 [2024-07-15 10:38:09.674112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.247 [2024-07-15 10:38:09.674136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.247 [2024-07-15 10:38:09.674151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.247 [2024-07-15 10:38:09.677713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.247 [2024-07-15 10:38:09.687183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.247 [2024-07-15 10:38:09.687625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.247 [2024-07-15 10:38:09.687656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.247 [2024-07-15 10:38:09.687672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.247 [2024-07-15 10:38:09.687919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.247 [2024-07-15 10:38:09.688160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.247 [2024-07-15 10:38:09.688184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.247 [2024-07-15 10:38:09.688198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.247 [2024-07-15 10:38:09.691757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.247 [2024-07-15 10:38:09.701032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.247 [2024-07-15 10:38:09.701430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.247 [2024-07-15 10:38:09.701460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.247 [2024-07-15 10:38:09.701477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.247 [2024-07-15 10:38:09.701714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.247 [2024-07-15 10:38:09.701966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.247 [2024-07-15 10:38:09.701990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.247 [2024-07-15 10:38:09.702005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.247 [2024-07-15 10:38:09.705796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.247 [2024-07-15 10:38:09.715063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.247 [2024-07-15 10:38:09.715463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.247 [2024-07-15 10:38:09.715493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.247 [2024-07-15 10:38:09.715510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.247 [2024-07-15 10:38:09.715747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.247 [2024-07-15 10:38:09.715999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.247 [2024-07-15 10:38:09.716023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.247 [2024-07-15 10:38:09.716039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.247 [2024-07-15 10:38:09.719603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.247 [2024-07-15 10:38:09.729088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.247 [2024-07-15 10:38:09.729555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.247 [2024-07-15 10:38:09.729585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.247 [2024-07-15 10:38:09.729602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.247 [2024-07-15 10:38:09.729839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.247 [2024-07-15 10:38:09.730097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.247 [2024-07-15 10:38:09.730121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.247 [2024-07-15 10:38:09.730136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.247 [2024-07-15 10:38:09.733698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.247 [2024-07-15 10:38:09.742965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.247 [2024-07-15 10:38:09.743405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.248 [2024-07-15 10:38:09.743436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.248 [2024-07-15 10:38:09.743452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.248 [2024-07-15 10:38:09.743690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.248 [2024-07-15 10:38:09.743943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.248 [2024-07-15 10:38:09.743966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.248 [2024-07-15 10:38:09.743981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.248 [2024-07-15 10:38:09.747544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.248 [2024-07-15 10:38:09.756805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.248 [2024-07-15 10:38:09.757271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.248 [2024-07-15 10:38:09.757302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.248 [2024-07-15 10:38:09.757319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.248 [2024-07-15 10:38:09.757556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.248 [2024-07-15 10:38:09.757798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.248 [2024-07-15 10:38:09.757821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.248 [2024-07-15 10:38:09.757836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.248 [2024-07-15 10:38:09.761406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.248 [2024-07-15 10:38:09.770660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.248 [2024-07-15 10:38:09.771097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.248 [2024-07-15 10:38:09.771128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.248 [2024-07-15 10:38:09.771145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.248 [2024-07-15 10:38:09.771382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.248 [2024-07-15 10:38:09.771622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.248 [2024-07-15 10:38:09.771645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.248 [2024-07-15 10:38:09.771660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.248 [2024-07-15 10:38:09.775239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.248 [2024-07-15 10:38:09.784499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.248 [2024-07-15 10:38:09.784989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.248 [2024-07-15 10:38:09.785020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.248 [2024-07-15 10:38:09.785037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.248 [2024-07-15 10:38:09.785274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.248 [2024-07-15 10:38:09.785515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.248 [2024-07-15 10:38:09.785538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.248 [2024-07-15 10:38:09.785553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.248 [2024-07-15 10:38:09.789125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.248 [2024-07-15 10:38:09.798391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.248 [2024-07-15 10:38:09.798816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.248 [2024-07-15 10:38:09.798846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.248 [2024-07-15 10:38:09.798863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.248 [2024-07-15 10:38:09.799111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.248 [2024-07-15 10:38:09.799354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.248 [2024-07-15 10:38:09.799376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.248 [2024-07-15 10:38:09.799391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.248 [2024-07-15 10:38:09.802962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.248 [2024-07-15 10:38:09.812222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.248 [2024-07-15 10:38:09.812626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.248 [2024-07-15 10:38:09.812656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.248 [2024-07-15 10:38:09.812673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.248 [2024-07-15 10:38:09.812921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.248 [2024-07-15 10:38:09.813163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.248 [2024-07-15 10:38:09.813185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.248 [2024-07-15 10:38:09.813200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.248 [2024-07-15 10:38:09.816762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.248 [2024-07-15 10:38:09.826242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.248 [2024-07-15 10:38:09.826666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.248 [2024-07-15 10:38:09.826714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.248 [2024-07-15 10:38:09.826737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.248 [2024-07-15 10:38:09.826986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.248 [2024-07-15 10:38:09.827228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.248 [2024-07-15 10:38:09.827251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.248 [2024-07-15 10:38:09.827266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.248 [2024-07-15 10:38:09.830828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.248 [2024-07-15 10:38:09.840096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.248 [2024-07-15 10:38:09.840495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.248 [2024-07-15 10:38:09.840542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.248 [2024-07-15 10:38:09.840560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.248 [2024-07-15 10:38:09.840797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.248 [2024-07-15 10:38:09.841048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.248 [2024-07-15 10:38:09.841071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.248 [2024-07-15 10:38:09.841086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.248 [2024-07-15 10:38:09.844648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.248 [2024-07-15 10:38:09.854125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.248 [2024-07-15 10:38:09.854577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.248 [2024-07-15 10:38:09.854607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.248 [2024-07-15 10:38:09.854625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.248 [2024-07-15 10:38:09.854861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.248 [2024-07-15 10:38:09.855113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.248 [2024-07-15 10:38:09.855136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.248 [2024-07-15 10:38:09.855151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.248 [2024-07-15 10:38:09.858712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.248 [2024-07-15 10:38:09.867989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.248 [2024-07-15 10:38:09.868454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.248 [2024-07-15 10:38:09.868501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.248 [2024-07-15 10:38:09.868518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.248 [2024-07-15 10:38:09.868756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.248 [2024-07-15 10:38:09.869009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.248 [2024-07-15 10:38:09.869039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.248 [2024-07-15 10:38:09.869054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.248 [2024-07-15 10:38:09.872615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.248 [2024-07-15 10:38:09.881874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.248 [2024-07-15 10:38:09.882306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.248 [2024-07-15 10:38:09.882337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.248 [2024-07-15 10:38:09.882354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.248 [2024-07-15 10:38:09.882591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.248 [2024-07-15 10:38:09.882832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.248 [2024-07-15 10:38:09.882855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.248 [2024-07-15 10:38:09.882870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.249 [2024-07-15 10:38:09.886444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.508 [2024-07-15 10:38:09.895701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.508 [2024-07-15 10:38:09.896110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.508 [2024-07-15 10:38:09.896140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.508 [2024-07-15 10:38:09.896158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.508 [2024-07-15 10:38:09.896394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.508 [2024-07-15 10:38:09.896635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.508 [2024-07-15 10:38:09.896658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.508 [2024-07-15 10:38:09.896673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.508 [2024-07-15 10:38:09.900251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.508 [2024-07-15 10:38:09.909713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.508 [2024-07-15 10:38:09.910154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.508 [2024-07-15 10:38:09.910184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.508 [2024-07-15 10:38:09.910201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.508 [2024-07-15 10:38:09.910439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.508 [2024-07-15 10:38:09.910680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.508 [2024-07-15 10:38:09.910703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.508 [2024-07-15 10:38:09.910717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.508 [2024-07-15 10:38:09.914295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.508 [2024-07-15 10:38:09.923555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.508 [2024-07-15 10:38:09.923993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.508 [2024-07-15 10:38:09.924024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.508 [2024-07-15 10:38:09.924041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.508 [2024-07-15 10:38:09.924279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.508 [2024-07-15 10:38:09.924520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.508 [2024-07-15 10:38:09.924543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.508 [2024-07-15 10:38:09.924557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.508 [2024-07-15 10:38:09.928137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.508 [2024-07-15 10:38:09.937406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.508 [2024-07-15 10:38:09.937834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.508 [2024-07-15 10:38:09.937865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.508 [2024-07-15 10:38:09.937891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.508 [2024-07-15 10:38:09.938130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.508 [2024-07-15 10:38:09.938372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.508 [2024-07-15 10:38:09.938395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.508 [2024-07-15 10:38:09.938410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.508 [2024-07-15 10:38:09.941985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.508 [2024-07-15 10:38:09.951260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.508 [2024-07-15 10:38:09.951686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.508 [2024-07-15 10:38:09.951717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.508 [2024-07-15 10:38:09.951734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.508 [2024-07-15 10:38:09.951983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.508 [2024-07-15 10:38:09.952225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.508 [2024-07-15 10:38:09.952248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.508 [2024-07-15 10:38:09.952263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.508 [2024-07-15 10:38:09.955834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.508 [2024-07-15 10:38:09.965105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.508 [2024-07-15 10:38:09.965532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.508 [2024-07-15 10:38:09.965562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.508 [2024-07-15 10:38:09.965580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.508 [2024-07-15 10:38:09.965823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.508 [2024-07-15 10:38:09.966074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.508 [2024-07-15 10:38:09.966098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.508 [2024-07-15 10:38:09.966113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.508 [2024-07-15 10:38:09.969681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.508 [2024-07-15 10:38:09.978961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.508 [2024-07-15 10:38:09.979407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.508 [2024-07-15 10:38:09.979438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.508 [2024-07-15 10:38:09.979455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.508 [2024-07-15 10:38:09.979691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.508 [2024-07-15 10:38:09.979942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.508 [2024-07-15 10:38:09.979966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.508 [2024-07-15 10:38:09.979981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.508 [2024-07-15 10:38:09.983562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.508 [2024-07-15 10:38:09.992846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.508 [2024-07-15 10:38:09.993402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.508 [2024-07-15 10:38:09.993452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.508 [2024-07-15 10:38:09.993469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.508 [2024-07-15 10:38:09.993706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.508 [2024-07-15 10:38:09.993958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.508 [2024-07-15 10:38:09.993982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.508 [2024-07-15 10:38:09.993997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.508 [2024-07-15 10:38:09.997565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.508 [2024-07-15 10:38:10.007375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.508 [2024-07-15 10:38:10.007810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.508 [2024-07-15 10:38:10.007844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.508 [2024-07-15 10:38:10.007862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.508 [2024-07-15 10:38:10.008113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.508 [2024-07-15 10:38:10.008357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.508 [2024-07-15 10:38:10.008380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.508 [2024-07-15 10:38:10.008401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.508 [2024-07-15 10:38:10.011987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.508 [2024-07-15 10:38:10.021271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.508 [2024-07-15 10:38:10.021682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.508 [2024-07-15 10:38:10.021715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.508 [2024-07-15 10:38:10.021733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.508 [2024-07-15 10:38:10.021981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.508 [2024-07-15 10:38:10.022224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.509 [2024-07-15 10:38:10.022247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.509 [2024-07-15 10:38:10.022263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.509 [2024-07-15 10:38:10.025826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.509 [2024-07-15 10:38:10.035321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.509 [2024-07-15 10:38:10.035730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.509 [2024-07-15 10:38:10.035761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.509 [2024-07-15 10:38:10.035778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.509 [2024-07-15 10:38:10.036025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.509 [2024-07-15 10:38:10.036268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.509 [2024-07-15 10:38:10.036292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.509 [2024-07-15 10:38:10.036307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.509 [2024-07-15 10:38:10.039872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.509 [2024-07-15 10:38:10.049361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.509 [2024-07-15 10:38:10.049820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.509 [2024-07-15 10:38:10.049851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.509 [2024-07-15 10:38:10.049868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.509 [2024-07-15 10:38:10.050116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.509 [2024-07-15 10:38:10.050357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.509 [2024-07-15 10:38:10.050380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.509 [2024-07-15 10:38:10.050395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.509 [2024-07-15 10:38:10.053966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.509 [2024-07-15 10:38:10.063248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.509 [2024-07-15 10:38:10.063773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.509 [2024-07-15 10:38:10.063824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.509 [2024-07-15 10:38:10.063841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.509 [2024-07-15 10:38:10.064089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.509 [2024-07-15 10:38:10.064332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.509 [2024-07-15 10:38:10.064355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.509 [2024-07-15 10:38:10.064370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.509 [2024-07-15 10:38:10.068045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.509 [2024-07-15 10:38:10.077123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.509 [2024-07-15 10:38:10.077542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.509 [2024-07-15 10:38:10.077573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.509 [2024-07-15 10:38:10.077591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.509 [2024-07-15 10:38:10.077828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.509 [2024-07-15 10:38:10.078080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.509 [2024-07-15 10:38:10.078105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.509 [2024-07-15 10:38:10.078120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.509 [2024-07-15 10:38:10.081693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.509 [2024-07-15 10:38:10.090979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.509 [2024-07-15 10:38:10.091376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.509 [2024-07-15 10:38:10.091407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.509 [2024-07-15 10:38:10.091425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.509 [2024-07-15 10:38:10.091662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.509 [2024-07-15 10:38:10.091925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.509 [2024-07-15 10:38:10.091948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.509 [2024-07-15 10:38:10.091963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.509 [2024-07-15 10:38:10.095524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.509 [2024-07-15 10:38:10.105025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.509 [2024-07-15 10:38:10.105455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.509 [2024-07-15 10:38:10.105486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.509 [2024-07-15 10:38:10.105503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.509 [2024-07-15 10:38:10.105746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.509 [2024-07-15 10:38:10.105997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.509 [2024-07-15 10:38:10.106031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.509 [2024-07-15 10:38:10.106045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.509 [2024-07-15 10:38:10.109604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.509 [2024-07-15 10:38:10.118869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.509 [2024-07-15 10:38:10.119362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.509 [2024-07-15 10:38:10.119392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.509 [2024-07-15 10:38:10.119409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.509 [2024-07-15 10:38:10.119646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.509 [2024-07-15 10:38:10.119896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.509 [2024-07-15 10:38:10.119929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.509 [2024-07-15 10:38:10.119943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.509 [2024-07-15 10:38:10.123505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.509 [2024-07-15 10:38:10.132763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.509 [2024-07-15 10:38:10.133237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.509 [2024-07-15 10:38:10.133288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.509 [2024-07-15 10:38:10.133305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.509 [2024-07-15 10:38:10.133542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.509 [2024-07-15 10:38:10.133783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.509 [2024-07-15 10:38:10.133806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.509 [2024-07-15 10:38:10.133820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.509 [2024-07-15 10:38:10.137396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.509 [2024-07-15 10:38:10.146665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.509 [2024-07-15 10:38:10.147102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.509 [2024-07-15 10:38:10.147133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.509 [2024-07-15 10:38:10.147151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.509 [2024-07-15 10:38:10.147388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.509 [2024-07-15 10:38:10.147629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.509 [2024-07-15 10:38:10.147652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.509 [2024-07-15 10:38:10.147667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.509 [2024-07-15 10:38:10.151248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.768 [2024-07-15 10:38:10.160511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.768 [2024-07-15 10:38:10.160918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.768 [2024-07-15 10:38:10.160950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:15.768 [2024-07-15 10:38:10.160968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:15.768 [2024-07-15 10:38:10.161206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:15.768 [2024-07-15 10:38:10.161447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.768 [2024-07-15 10:38:10.161470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.768 [2024-07-15 10:38:10.161484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.768 [2024-07-15 10:38:10.165061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:15.768 [2024-07-15 10:38:10.174527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.768 [2024-07-15 10:38:10.174955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.768 [2024-07-15 10:38:10.174987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.768 [2024-07-15 10:38:10.175004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.768 [2024-07-15 10:38:10.175242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.768 [2024-07-15 10:38:10.175482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.768 [2024-07-15 10:38:10.175505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.768 [2024-07-15 10:38:10.175520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.768 [2024-07-15 10:38:10.179089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.768 [2024-07-15 10:38:10.188560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.768 [2024-07-15 10:38:10.188992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.768 [2024-07-15 10:38:10.189023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.768 [2024-07-15 10:38:10.189041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.768 [2024-07-15 10:38:10.189278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.768 [2024-07-15 10:38:10.189519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.768 [2024-07-15 10:38:10.189542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.768 [2024-07-15 10:38:10.189556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.769 [2024-07-15 10:38:10.193123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.769 [2024-07-15 10:38:10.202591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.769 [2024-07-15 10:38:10.203025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.769 [2024-07-15 10:38:10.203061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.769 [2024-07-15 10:38:10.203079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.769 [2024-07-15 10:38:10.203316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.769 [2024-07-15 10:38:10.203557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.769 [2024-07-15 10:38:10.203580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.769 [2024-07-15 10:38:10.203595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.769 [2024-07-15 10:38:10.207165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.769 [2024-07-15 10:38:10.216428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.769 [2024-07-15 10:38:10.216856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.769 [2024-07-15 10:38:10.216894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.769 [2024-07-15 10:38:10.216913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.769 [2024-07-15 10:38:10.217151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.769 [2024-07-15 10:38:10.217391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.769 [2024-07-15 10:38:10.217414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.769 [2024-07-15 10:38:10.217429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.769 [2024-07-15 10:38:10.221004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.769 [2024-07-15 10:38:10.230262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.769 [2024-07-15 10:38:10.230660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.769 [2024-07-15 10:38:10.230690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.769 [2024-07-15 10:38:10.230707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.769 [2024-07-15 10:38:10.230956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.769 [2024-07-15 10:38:10.231198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.769 [2024-07-15 10:38:10.231220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.769 [2024-07-15 10:38:10.231235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.769 [2024-07-15 10:38:10.234799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.769 [2024-07-15 10:38:10.244297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.769 [2024-07-15 10:38:10.244726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.769 [2024-07-15 10:38:10.244757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.769 [2024-07-15 10:38:10.244774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.769 [2024-07-15 10:38:10.245022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.769 [2024-07-15 10:38:10.245270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.769 [2024-07-15 10:38:10.245294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.769 [2024-07-15 10:38:10.245309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.769 [2024-07-15 10:38:10.248873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.769 [2024-07-15 10:38:10.258149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.769 [2024-07-15 10:38:10.258573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.769 [2024-07-15 10:38:10.258603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.769 [2024-07-15 10:38:10.258620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.769 [2024-07-15 10:38:10.258857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.769 [2024-07-15 10:38:10.259108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.769 [2024-07-15 10:38:10.259132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.769 [2024-07-15 10:38:10.259147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.769 [2024-07-15 10:38:10.262710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.769 [2024-07-15 10:38:10.271991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.769 [2024-07-15 10:38:10.272392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.769 [2024-07-15 10:38:10.272422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.769 [2024-07-15 10:38:10.272439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.769 [2024-07-15 10:38:10.272676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.769 [2024-07-15 10:38:10.272929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.769 [2024-07-15 10:38:10.272962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.769 [2024-07-15 10:38:10.272978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.769 [2024-07-15 10:38:10.276546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.769 [2024-07-15 10:38:10.286030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.769 [2024-07-15 10:38:10.286433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.769 [2024-07-15 10:38:10.286464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.769 [2024-07-15 10:38:10.286481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.769 [2024-07-15 10:38:10.286718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.769 [2024-07-15 10:38:10.286969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.769 [2024-07-15 10:38:10.286993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.769 [2024-07-15 10:38:10.287008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.769 [2024-07-15 10:38:10.290572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.769 [2024-07-15 10:38:10.299868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.769 [2024-07-15 10:38:10.300278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.769 [2024-07-15 10:38:10.300309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.769 [2024-07-15 10:38:10.300326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.769 [2024-07-15 10:38:10.300564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.769 [2024-07-15 10:38:10.300805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.769 [2024-07-15 10:38:10.300828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.769 [2024-07-15 10:38:10.300842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.769 [2024-07-15 10:38:10.304424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.769 [2024-07-15 10:38:10.313908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.769 [2024-07-15 10:38:10.314313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.769 [2024-07-15 10:38:10.314344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.769 [2024-07-15 10:38:10.314361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.769 [2024-07-15 10:38:10.314598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.769 [2024-07-15 10:38:10.314839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.769 [2024-07-15 10:38:10.314862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.769 [2024-07-15 10:38:10.314886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.769 [2024-07-15 10:38:10.318456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.769 [2024-07-15 10:38:10.327731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.769 [2024-07-15 10:38:10.328181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.769 [2024-07-15 10:38:10.328212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.769 [2024-07-15 10:38:10.328228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.769 [2024-07-15 10:38:10.328466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.769 [2024-07-15 10:38:10.328706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.769 [2024-07-15 10:38:10.328728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.769 [2024-07-15 10:38:10.328744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.769 [2024-07-15 10:38:10.332319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.769 [2024-07-15 10:38:10.341573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.769 [2024-07-15 10:38:10.341974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.769 [2024-07-15 10:38:10.342005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.769 [2024-07-15 10:38:10.342027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.769 [2024-07-15 10:38:10.342265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.769 [2024-07-15 10:38:10.342506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.769 [2024-07-15 10:38:10.342529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.769 [2024-07-15 10:38:10.342544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.770 [2024-07-15 10:38:10.346113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.770 [2024-07-15 10:38:10.355578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.770 [2024-07-15 10:38:10.355996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.770 [2024-07-15 10:38:10.356028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.770 [2024-07-15 10:38:10.356045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.770 [2024-07-15 10:38:10.356283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.770 [2024-07-15 10:38:10.356524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.770 [2024-07-15 10:38:10.356547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.770 [2024-07-15 10:38:10.356562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.770 [2024-07-15 10:38:10.360133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.770 [2024-07-15 10:38:10.369611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.770 [2024-07-15 10:38:10.370023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.770 [2024-07-15 10:38:10.370053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.770 [2024-07-15 10:38:10.370071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.770 [2024-07-15 10:38:10.370308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.770 [2024-07-15 10:38:10.370549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.770 [2024-07-15 10:38:10.370571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.770 [2024-07-15 10:38:10.370586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.770 [2024-07-15 10:38:10.374167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.770 [2024-07-15 10:38:10.383662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.770 [2024-07-15 10:38:10.384112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.770 [2024-07-15 10:38:10.384144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.770 [2024-07-15 10:38:10.384161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.770 [2024-07-15 10:38:10.384398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.770 [2024-07-15 10:38:10.384639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.770 [2024-07-15 10:38:10.384662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.770 [2024-07-15 10:38:10.384682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.770 [2024-07-15 10:38:10.388264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.770 [2024-07-15 10:38:10.397536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.770 [2024-07-15 10:38:10.397964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.770 [2024-07-15 10:38:10.397995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.770 [2024-07-15 10:38:10.398012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.770 [2024-07-15 10:38:10.398249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.770 [2024-07-15 10:38:10.398490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.770 [2024-07-15 10:38:10.398512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.770 [2024-07-15 10:38:10.398527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.770 [2024-07-15 10:38:10.402107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.770 [2024-07-15 10:38:10.411375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.770 [2024-07-15 10:38:10.411778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.770 [2024-07-15 10:38:10.411808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:15.770 [2024-07-15 10:38:10.411825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:15.770 [2024-07-15 10:38:10.412072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:15.770 [2024-07-15 10:38:10.412314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.770 [2024-07-15 10:38:10.412337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.770 [2024-07-15 10:38:10.412351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.770 [2024-07-15 10:38:10.415924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.029 [2024-07-15 10:38:10.425410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.029 [2024-07-15 10:38:10.425809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.029 [2024-07-15 10:38:10.425840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.029 [2024-07-15 10:38:10.425858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.029 [2024-07-15 10:38:10.426108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.029 [2024-07-15 10:38:10.426351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.029 [2024-07-15 10:38:10.426374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.029 [2024-07-15 10:38:10.426388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.029 [2024-07-15 10:38:10.429964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.029 [2024-07-15 10:38:10.439453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.029 [2024-07-15 10:38:10.439860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.029 [2024-07-15 10:38:10.439898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.029 [2024-07-15 10:38:10.439917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.029 [2024-07-15 10:38:10.440153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.029 [2024-07-15 10:38:10.440395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.029 [2024-07-15 10:38:10.440418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.029 [2024-07-15 10:38:10.440433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.029 [2024-07-15 10:38:10.444006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.029 [2024-07-15 10:38:10.453482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.029 [2024-07-15 10:38:10.453910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.029 [2024-07-15 10:38:10.453941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.029 [2024-07-15 10:38:10.453958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.029 [2024-07-15 10:38:10.454196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.029 [2024-07-15 10:38:10.454436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.029 [2024-07-15 10:38:10.454459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.029 [2024-07-15 10:38:10.454474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.029 [2024-07-15 10:38:10.458051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.029 [2024-07-15 10:38:10.467313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.029 [2024-07-15 10:38:10.467732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.029 [2024-07-15 10:38:10.467763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.029 [2024-07-15 10:38:10.467780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.029 [2024-07-15 10:38:10.468028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.029 [2024-07-15 10:38:10.468269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.029 [2024-07-15 10:38:10.468292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.029 [2024-07-15 10:38:10.468307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.029 [2024-07-15 10:38:10.471873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.029 [2024-07-15 10:38:10.481141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.029 [2024-07-15 10:38:10.481586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.029 [2024-07-15 10:38:10.481617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.029 [2024-07-15 10:38:10.481635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.029 [2024-07-15 10:38:10.481890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.029 [2024-07-15 10:38:10.482133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.029 [2024-07-15 10:38:10.482156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.029 [2024-07-15 10:38:10.482171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.029 [2024-07-15 10:38:10.485732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.029 [2024-07-15 10:38:10.495014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.029 [2024-07-15 10:38:10.495440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.029 [2024-07-15 10:38:10.495471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.029 [2024-07-15 10:38:10.495489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.029 [2024-07-15 10:38:10.495726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.029 [2024-07-15 10:38:10.495980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.029 [2024-07-15 10:38:10.496005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.029 [2024-07-15 10:38:10.496019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.029 [2024-07-15 10:38:10.499588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.029 [2024-07-15 10:38:10.508851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.030 [2024-07-15 10:38:10.509355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.030 [2024-07-15 10:38:10.509405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.030 [2024-07-15 10:38:10.509423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.030 [2024-07-15 10:38:10.509660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.030 [2024-07-15 10:38:10.509910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.030 [2024-07-15 10:38:10.509942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.030 [2024-07-15 10:38:10.509956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.030 [2024-07-15 10:38:10.513522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.030 [2024-07-15 10:38:10.522781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.030 [2024-07-15 10:38:10.523278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.030 [2024-07-15 10:38:10.523330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.030 [2024-07-15 10:38:10.523347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.030 [2024-07-15 10:38:10.523584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.030 [2024-07-15 10:38:10.523825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.030 [2024-07-15 10:38:10.523848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.030 [2024-07-15 10:38:10.523869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.030 [2024-07-15 10:38:10.527439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.030 [2024-07-15 10:38:10.536691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.030 [2024-07-15 10:38:10.537194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.030 [2024-07-15 10:38:10.537242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.030 [2024-07-15 10:38:10.537259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.030 [2024-07-15 10:38:10.537495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.030 [2024-07-15 10:38:10.537736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.030 [2024-07-15 10:38:10.537758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.030 [2024-07-15 10:38:10.537773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.030 [2024-07-15 10:38:10.541341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.030 [2024-07-15 10:38:10.550594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.030 [2024-07-15 10:38:10.551024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.030 [2024-07-15 10:38:10.551054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.030 [2024-07-15 10:38:10.551072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.030 [2024-07-15 10:38:10.551308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.030 [2024-07-15 10:38:10.551550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.030 [2024-07-15 10:38:10.551572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.030 [2024-07-15 10:38:10.551587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.030 [2024-07-15 10:38:10.555163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.030 [2024-07-15 10:38:10.564421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.030 [2024-07-15 10:38:10.564846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.030 [2024-07-15 10:38:10.564885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.030 [2024-07-15 10:38:10.564905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.030 [2024-07-15 10:38:10.565143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.030 [2024-07-15 10:38:10.565383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.030 [2024-07-15 10:38:10.565406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.030 [2024-07-15 10:38:10.565421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.030 [2024-07-15 10:38:10.568991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.030 [2024-07-15 10:38:10.578255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.030 [2024-07-15 10:38:10.578658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.030 [2024-07-15 10:38:10.578694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.030 [2024-07-15 10:38:10.578712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.030 [2024-07-15 10:38:10.578962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.030 [2024-07-15 10:38:10.579204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.030 [2024-07-15 10:38:10.579227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.030 [2024-07-15 10:38:10.579242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.030 [2024-07-15 10:38:10.582809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.030 [2024-07-15 10:38:10.592289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.030 [2024-07-15 10:38:10.592728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.030 [2024-07-15 10:38:10.592759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.030 [2024-07-15 10:38:10.592777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.030 [2024-07-15 10:38:10.593024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.030 [2024-07-15 10:38:10.593267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.030 [2024-07-15 10:38:10.593290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.030 [2024-07-15 10:38:10.593305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.030 [2024-07-15 10:38:10.596870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.030 [2024-07-15 10:38:10.606144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.030 [2024-07-15 10:38:10.606572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.030 [2024-07-15 10:38:10.606603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.030 [2024-07-15 10:38:10.606620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.030 [2024-07-15 10:38:10.606856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.030 [2024-07-15 10:38:10.607108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.030 [2024-07-15 10:38:10.607131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.031 [2024-07-15 10:38:10.607146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.031 [2024-07-15 10:38:10.610711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.031 [2024-07-15 10:38:10.619979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.031 [2024-07-15 10:38:10.620403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.031 [2024-07-15 10:38:10.620434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.031 [2024-07-15 10:38:10.620451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.031 [2024-07-15 10:38:10.620688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.031 [2024-07-15 10:38:10.620947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.031 [2024-07-15 10:38:10.620970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.031 [2024-07-15 10:38:10.620985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.031 [2024-07-15 10:38:10.624549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.031 [2024-07-15 10:38:10.633809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.031 [2024-07-15 10:38:10.634226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.031 [2024-07-15 10:38:10.634256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.031 [2024-07-15 10:38:10.634273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.031 [2024-07-15 10:38:10.634509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.031 [2024-07-15 10:38:10.634750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.031 [2024-07-15 10:38:10.634773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.031 [2024-07-15 10:38:10.634787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.031 [2024-07-15 10:38:10.638362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.031 [2024-07-15 10:38:10.647659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.031 [2024-07-15 10:38:10.648067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.031 [2024-07-15 10:38:10.648098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.031 [2024-07-15 10:38:10.648115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.031 [2024-07-15 10:38:10.648352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.031 [2024-07-15 10:38:10.648594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.031 [2024-07-15 10:38:10.648616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.031 [2024-07-15 10:38:10.648631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2413350 Killed "${NVMF_APP[@]}" "$@"
00:25:16.031 10:38:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:25:16.031 10:38:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:25:16.031 [2024-07-15 10:38:10.652210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.031 10:38:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:16.031 10:38:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:16.031 10:38:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:16.031 10:38:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2414310
00:25:16.031 10:38:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:25:16.031 10:38:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2414310
00:25:16.031 10:38:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2414310 ']'
00:25:16.031 10:38:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:16.031 10:38:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:16.031 10:38:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
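The interleaved script output above explains the connection storm: bdevperf.sh (line 35) has killed the previous target process (PID 2413350), so every reconnect attempt hits a dead listener, and tgt_init/nvmfappstart immediately relaunch nvmf_tgt inside the cvl_0_0_ns_spdk namespace while waitforlisten polls for the new PID 2414310. A simplified, hypothetical sketch of that kill-and-restart pattern, not the actual SPDK helper implementations; the paths, flags and retry count are taken from the trace:

#!/usr/bin/env bash
# Hypothetical condensation of the tgt_init / nvmfappstart / waitforlisten trace.
rpc_sock=/var/tmp/spdk.sock   # RPC socket named in the trace
old_pid=2413350               # PID reported as "Killed" above

kill -9 "$old_pid" 2>/dev/null || true   # old target gone -> the ECONNREFUSED storm
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
new_pid=$!

# waitforlisten-style loop: poll until the RPC socket appears (max_retries=100 above)
for _ in $(seq 1 100); do
    [ -S "$rpc_sock" ] && break
    sleep 0.1
done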
00:25:16.031 10:38:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:16.031 10:38:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:16.031 [2024-07-15 10:38:10.661691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.031 [2024-07-15 10:38:10.662078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.031 [2024-07-15 10:38:10.662107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.031 [2024-07-15 10:38:10.662123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.031 [2024-07-15 10:38:10.662360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.031 [2024-07-15 10:38:10.662601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.031 [2024-07-15 10:38:10.662624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.031 [2024-07-15 10:38:10.662639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.031 [2024-07-15 10:38:10.666213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.031 [2024-07-15 10:38:10.675697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.031 [2024-07-15 10:38:10.676131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.031 [2024-07-15 10:38:10.676163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.031 [2024-07-15 10:38:10.676180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.031 [2024-07-15 10:38:10.676417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.031 [2024-07-15 10:38:10.676659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.031 [2024-07-15 10:38:10.676682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.031 [2024-07-15 10:38:10.676696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.292 [2024-07-15 10:38:10.680273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.292 [2024-07-15 10:38:10.689538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.292 [2024-07-15 10:38:10.689966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.292 [2024-07-15 10:38:10.689997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.292 [2024-07-15 10:38:10.690015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.292 [2024-07-15 10:38:10.690252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.292 [2024-07-15 10:38:10.690493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.292 [2024-07-15 10:38:10.690516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.292 [2024-07-15 10:38:10.690531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.292 [2024-07-15 10:38:10.694109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.292 [2024-07-15 10:38:10.703388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.292 [2024-07-15 10:38:10.703799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.292 [2024-07-15 10:38:10.703830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.292 [2024-07-15 10:38:10.703847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.292 [2024-07-15 10:38:10.704093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.292 [2024-07-15 10:38:10.704335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.292 [2024-07-15 10:38:10.704358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.292 [2024-07-15 10:38:10.704373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.292 [2024-07-15 10:38:10.707947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.292 [2024-07-15 10:38:10.708324] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:25:16.292 [2024-07-15 10:38:10.708393] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:16.292 [2024-07-15 10:38:10.717368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.292 [2024-07-15 10:38:10.717724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.292 [2024-07-15 10:38:10.717750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.292 [2024-07-15 10:38:10.717765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.292 [2024-07-15 10:38:10.718001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.292 [2024-07-15 10:38:10.718246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.292 [2024-07-15 10:38:10.718265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.292 [2024-07-15 10:38:10.718278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.292 [2024-07-15 10:38:10.721591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.292 [2024-07-15 10:38:10.730945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.293 [2024-07-15 10:38:10.731319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.293 [2024-07-15 10:38:10.731347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.293 [2024-07-15 10:38:10.731363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.293 [2024-07-15 10:38:10.731592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.293 [2024-07-15 10:38:10.731812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.293 [2024-07-15 10:38:10.731831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.293 [2024-07-15 10:38:10.731843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.293 [2024-07-15 10:38:10.735027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.293 [2024-07-15 10:38:10.744300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.293 [2024-07-15 10:38:10.744735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.293 [2024-07-15 10:38:10.744762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.293 [2024-07-15 10:38:10.744777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.293 [2024-07-15 10:38:10.745014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.293 [2024-07-15 10:38:10.745242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.293 [2024-07-15 10:38:10.745261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.293 [2024-07-15 10:38:10.745273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.293 EAL: No free 2048 kB hugepages reported on node 1
00:25:16.293 [2024-07-15 10:38:10.748349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.293 [2024-07-15 10:38:10.758117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.293 [2024-07-15 10:38:10.758525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.293 [2024-07-15 10:38:10.758555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.293 [2024-07-15 10:38:10.758573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.293 [2024-07-15 10:38:10.758810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.293 [2024-07-15 10:38:10.759059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.293 [2024-07-15 10:38:10.759080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.293 [2024-07-15 10:38:10.759094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.293 [2024-07-15 10:38:10.762651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
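The EAL notice above ("No free 2048 kB hugepages reported on node 1") only means that NUMA node 1 has no 2 MB hugepages reserved; startup continues as long as another node can satisfy the allocation. The per-node pools can be checked and topped up through the standard sysfs paths shown below; these are generic Linux interfaces, not SPDK-specific, and the page count is only an example:

# Show the 2048 kB hugepage pool on every NUMA node
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages

# Example: reserve 1024 pages on node 1 (requires root)
echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages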
00:25:16.293 [2024-07-15 10:38:10.772166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.293 [2024-07-15 10:38:10.772667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.293 [2024-07-15 10:38:10.772695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.293 [2024-07-15 10:38:10.772710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.293 [2024-07-15 10:38:10.772975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.293 [2024-07-15 10:38:10.773215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.293 [2024-07-15 10:38:10.773239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.293 [2024-07-15 10:38:10.773254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.293 [2024-07-15 10:38:10.776761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.293 [2024-07-15 10:38:10.779785] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:16.293 [2024-07-15 10:38:10.786101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.293 [2024-07-15 10:38:10.786619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.293 [2024-07-15 10:38:10.786650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.293 [2024-07-15 10:38:10.786668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.293 [2024-07-15 10:38:10.786951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.293 [2024-07-15 10:38:10.787185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.293 [2024-07-15 10:38:10.787208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.293 [2024-07-15 10:38:10.787225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.293 [2024-07-15 10:38:10.790732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.293 [2024-07-15 10:38:10.800087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.293 [2024-07-15 10:38:10.800672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.293 [2024-07-15 10:38:10.800706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.293 [2024-07-15 10:38:10.800724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.293 [2024-07-15 10:38:10.801001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.293 [2024-07-15 10:38:10.801230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.293 [2024-07-15 10:38:10.801254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.293 [2024-07-15 10:38:10.801270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.293 [2024-07-15 10:38:10.804778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.293 [2024-07-15 10:38:10.814071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.293 [2024-07-15 10:38:10.814595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.293 [2024-07-15 10:38:10.814623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.293 [2024-07-15 10:38:10.814639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.293 [2024-07-15 10:38:10.814912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.293 [2024-07-15 10:38:10.815124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.293 [2024-07-15 10:38:10.815145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.293 [2024-07-15 10:38:10.815173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.293 [2024-07-15 10:38:10.818678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.293 [2024-07-15 10:38:10.828010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.293 [2024-07-15 10:38:10.828454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.293 [2024-07-15 10:38:10.828484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.293 [2024-07-15 10:38:10.828501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.293 [2024-07-15 10:38:10.828738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.293 [2024-07-15 10:38:10.829000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.293 [2024-07-15 10:38:10.829021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.293 [2024-07-15 10:38:10.829044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.293 [2024-07-15 10:38:10.832535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.293 [2024-07-15 10:38:10.841970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.293 [2024-07-15 10:38:10.842478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.293 [2024-07-15 10:38:10.842513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.293 [2024-07-15 10:38:10.842532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.293 [2024-07-15 10:38:10.842775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.293 [2024-07-15 10:38:10.843027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.293 [2024-07-15 10:38:10.843048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.293 [2024-07-15 10:38:10.843062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.293 [2024-07-15 10:38:10.846570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.293 [2024-07-15 10:38:10.855901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.293 [2024-07-15 10:38:10.856460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.293 [2024-07-15 10:38:10.856498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.293 [2024-07-15 10:38:10.856519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.293 [2024-07-15 10:38:10.856762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.293 [2024-07-15 10:38:10.857015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.293 [2024-07-15 10:38:10.857036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.293 [2024-07-15 10:38:10.857051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.293 [2024-07-15 10:38:10.860549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.293 [2024-07-15 10:38:10.869859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.293 [2024-07-15 10:38:10.870313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.293 [2024-07-15 10:38:10.870345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.293 [2024-07-15 10:38:10.870363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.293 [2024-07-15 10:38:10.870600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.293 [2024-07-15 10:38:10.870842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.293 [2024-07-15 10:38:10.870865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.293 [2024-07-15 10:38:10.870891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.294 [2024-07-15 10:38:10.874396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.294 [2024-07-15 10:38:10.883689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.294 [2024-07-15 10:38:10.884123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.294 [2024-07-15 10:38:10.884151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.294 [2024-07-15 10:38:10.884167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.294 [2024-07-15 10:38:10.884413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.294 [2024-07-15 10:38:10.884666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.294 [2024-07-15 10:38:10.884689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.294 [2024-07-15 10:38:10.884704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.294 [2024-07-15 10:38:10.888224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.294 [2024-07-15 10:38:10.896325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:16.294 [2024-07-15 10:38:10.896360] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:16.294 [2024-07-15 10:38:10.896376] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:16.294 [2024-07-15 10:38:10.896388] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:16.294 [2024-07-15 10:38:10.896399] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:16.294 [2024-07-15 10:38:10.896478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:25:16.294 [2024-07-15 10:38:10.896597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:25:16.294 [2024-07-15 10:38:10.896600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:16.294 [2024-07-15 10:38:10.897374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.294 [2024-07-15 10:38:10.897752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.294 [2024-07-15 10:38:10.897780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.294 [2024-07-15 10:38:10.897796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.294 [2024-07-15 10:38:10.898023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.294 [2024-07-15 10:38:10.898255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.294 [2024-07-15 10:38:10.898275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.294 [2024-07-15 10:38:10.898289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.294 [2024-07-15 10:38:10.901506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.294 [2024-07-15 10:38:10.910935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.294 [2024-07-15 10:38:10.911541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.294 [2024-07-15 10:38:10.911581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.294 [2024-07-15 10:38:10.911600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.294 [2024-07-15 10:38:10.911824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.294 [2024-07-15 10:38:10.912056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.294 [2024-07-15 10:38:10.912078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.294 [2024-07-15 10:38:10.912106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.294 [2024-07-15 10:38:10.915406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.294 [2024-07-15 10:38:10.924653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.294 [2024-07-15 10:38:10.925254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.294 [2024-07-15 10:38:10.925294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.294 [2024-07-15 10:38:10.925314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.294 [2024-07-15 10:38:10.925549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.294 [2024-07-15 10:38:10.925782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.294 [2024-07-15 10:38:10.925803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.294 [2024-07-15 10:38:10.925820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.294 [2024-07-15 10:38:10.929137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.294 [2024-07-15 10:38:10.938447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.294 [2024-07-15 10:38:10.938945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.294 [2024-07-15 10:38:10.938983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.294 [2024-07-15 10:38:10.939003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.294 [2024-07-15 10:38:10.939241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.294 [2024-07-15 10:38:10.939456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.294 [2024-07-15 10:38:10.939477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.294 [2024-07-15 10:38:10.939494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.554 [2024-07-15 10:38:10.942771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.554 [2024-07-15 10:38:10.951938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.554 [2024-07-15 10:38:10.952389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.554 [2024-07-15 10:38:10.952425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.554 [2024-07-15 10:38:10.952444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.554 [2024-07-15 10:38:10.952681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.554 [2024-07-15 10:38:10.952904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.554 [2024-07-15 10:38:10.952925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.554 [2024-07-15 10:38:10.952940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.554 [2024-07-15 10:38:10.956130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.554 [2024-07-15 10:38:10.965380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.554 [2024-07-15 10:38:10.965972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.554 [2024-07-15 10:38:10.966011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.554 [2024-07-15 10:38:10.966030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.554 [2024-07-15 10:38:10.966267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.554 [2024-07-15 10:38:10.966482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.554 [2024-07-15 10:38:10.966502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.554 [2024-07-15 10:38:10.966518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.554 [2024-07-15 10:38:10.969717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.554 [2024-07-15 10:38:10.978940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.554 [2024-07-15 10:38:10.979364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.554 [2024-07-15 10:38:10.979394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.554 [2024-07-15 10:38:10.979410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.554 [2024-07-15 10:38:10.979642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.554 [2024-07-15 10:38:10.979853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.554 [2024-07-15 10:38:10.979873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.554 [2024-07-15 10:38:10.979913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.554 [2024-07-15 10:38:10.983079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.554 [2024-07-15 10:38:10.992428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.554 [2024-07-15 10:38:10.992817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.554 [2024-07-15 10:38:10.992845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.554 [2024-07-15 10:38:10.992860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.554 [2024-07-15 10:38:10.993081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.554 [2024-07-15 10:38:10.993311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.554 [2024-07-15 10:38:10.993331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.554 [2024-07-15 10:38:10.993344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.554 [2024-07-15 10:38:10.996499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.554 [2024-07-15 10:38:11.005901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.554 [2024-07-15 10:38:11.006285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.554 [2024-07-15 10:38:11.006312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.554 [2024-07-15 10:38:11.006328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.554 [2024-07-15 10:38:11.006556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.554 [2024-07-15 10:38:11.006776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.554 [2024-07-15 10:38:11.006796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.554 [2024-07-15 10:38:11.006809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.554 [2024-07-15 10:38:11.010018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.554 [2024-07-15 10:38:11.019361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.554 [2024-07-15 10:38:11.019769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.554 [2024-07-15 10:38:11.019797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.554 [2024-07-15 10:38:11.019812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.554 [2024-07-15 10:38:11.020034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.554 [2024-07-15 10:38:11.020265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.554 [2024-07-15 10:38:11.020286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.554 [2024-07-15 10:38:11.020299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.554 [2024-07-15 10:38:11.023495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.554 [2024-07-15 10:38:11.032827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.554 [2024-07-15 10:38:11.033208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.554 [2024-07-15 10:38:11.033236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.554 [2024-07-15 10:38:11.033251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.554 [2024-07-15 10:38:11.033479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.554 [2024-07-15 10:38:11.033689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.554 [2024-07-15 10:38:11.033709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.554 [2024-07-15 10:38:11.033722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.554 [2024-07-15 10:38:11.036835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.554 [2024-07-15 10:38:11.046408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.554 [2024-07-15 10:38:11.046810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.554 [2024-07-15 10:38:11.046838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.554 [2024-07-15 10:38:11.046853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.554 [2024-07-15 10:38:11.047074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.554 [2024-07-15 10:38:11.047305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.554 [2024-07-15 10:38:11.047325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.554 [2024-07-15 10:38:11.047338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.554 [2024-07-15 10:38:11.050536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.554 [2024-07-15 10:38:11.059888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.554 [2024-07-15 10:38:11.060263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.554 [2024-07-15 10:38:11.060291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.554 [2024-07-15 10:38:11.060306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.554 [2024-07-15 10:38:11.060534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.554 [2024-07-15 10:38:11.060744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.554 [2024-07-15 10:38:11.060764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.554 [2024-07-15 10:38:11.060777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.554 [2024-07-15 10:38:11.063959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.554 [2024-07-15 10:38:11.073310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.554 [2024-07-15 10:38:11.073687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.554 [2024-07-15 10:38:11.073715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.554 [2024-07-15 10:38:11.073730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.554 [2024-07-15 10:38:11.073966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.554 [2024-07-15 10:38:11.074178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.554 [2024-07-15 10:38:11.074197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.554 [2024-07-15 10:38:11.074210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.554 [2024-07-15 10:38:11.077405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.554 [2024-07-15 10:38:11.086770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.554 [2024-07-15 10:38:11.087172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.554 [2024-07-15 10:38:11.087200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.554 [2024-07-15 10:38:11.087215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.555 [2024-07-15 10:38:11.087428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.555 [2024-07-15 10:38:11.087655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.555 [2024-07-15 10:38:11.087674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.555 [2024-07-15 10:38:11.087687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.555 [2024-07-15 10:38:11.090955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.555 [2024-07-15 10:38:11.100349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.555 [2024-07-15 10:38:11.100729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.555 [2024-07-15 10:38:11.100757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.555 [2024-07-15 10:38:11.100777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.555 [2024-07-15 10:38:11.100999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.555 [2024-07-15 10:38:11.101232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.555 [2024-07-15 10:38:11.101253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.555 [2024-07-15 10:38:11.101266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.555 [2024-07-15 10:38:11.104466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.555 [2024-07-15 10:38:11.113842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.555 [2024-07-15 10:38:11.114266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.555 [2024-07-15 10:38:11.114295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.555 [2024-07-15 10:38:11.114311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.555 [2024-07-15 10:38:11.114524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.555 [2024-07-15 10:38:11.114757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.555 [2024-07-15 10:38:11.114777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.555 [2024-07-15 10:38:11.114790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.555 [2024-07-15 10:38:11.117998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.555 [2024-07-15 10:38:11.127353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.555 [2024-07-15 10:38:11.127741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.555 [2024-07-15 10:38:11.127768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.555 [2024-07-15 10:38:11.127783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.555 [2024-07-15 10:38:11.128004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.555 [2024-07-15 10:38:11.128236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.555 [2024-07-15 10:38:11.128257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.555 [2024-07-15 10:38:11.128269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.555 [2024-07-15 10:38:11.131513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.555 [2024-07-15 10:38:11.140894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.555 [2024-07-15 10:38:11.141266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.555 [2024-07-15 10:38:11.141294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.555 [2024-07-15 10:38:11.141309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.555 [2024-07-15 10:38:11.141522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.555 [2024-07-15 10:38:11.141749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.555 [2024-07-15 10:38:11.141774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.555 [2024-07-15 10:38:11.141788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.555 [2024-07-15 10:38:11.144972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.555 [2024-07-15 10:38:11.154352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.555 [2024-07-15 10:38:11.154766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.555 [2024-07-15 10:38:11.154794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.555 [2024-07-15 10:38:11.154810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.555 [2024-07-15 10:38:11.155034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.555 [2024-07-15 10:38:11.155269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.555 [2024-07-15 10:38:11.155289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.555 [2024-07-15 10:38:11.155302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.555 [2024-07-15 10:38:11.158504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.555 [2024-07-15 10:38:11.167887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.555 [2024-07-15 10:38:11.168277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.555 [2024-07-15 10:38:11.168305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.555 [2024-07-15 10:38:11.168320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.555 [2024-07-15 10:38:11.168548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.555 [2024-07-15 10:38:11.168758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.555 [2024-07-15 10:38:11.168778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.555 [2024-07-15 10:38:11.168791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.555 [2024-07-15 10:38:11.171976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.555 [2024-07-15 10:38:11.181346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.555 [2024-07-15 10:38:11.181709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.555 [2024-07-15 10:38:11.181737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.555 [2024-07-15 10:38:11.181752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.555 [2024-07-15 10:38:11.181976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.555 [2024-07-15 10:38:11.182194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.555 [2024-07-15 10:38:11.182215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.555 [2024-07-15 10:38:11.182228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.555 [2024-07-15 10:38:11.185495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.555 [2024-07-15 10:38:11.194901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.555 [2024-07-15 10:38:11.195268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.555 [2024-07-15 10:38:11.195296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.555 [2024-07-15 10:38:11.195311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.555 [2024-07-15 10:38:11.195540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.555 [2024-07-15 10:38:11.195750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.555 [2024-07-15 10:38:11.195776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.555 [2024-07-15 10:38:11.195789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.555 [2024-07-15 10:38:11.199085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.814 [2024-07-15 10:38:11.208602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.814 [2024-07-15 10:38:11.209039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.814 [2024-07-15 10:38:11.209067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.814 [2024-07-15 10:38:11.209082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.814 [2024-07-15 10:38:11.209296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.814 [2024-07-15 10:38:11.209523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.814 [2024-07-15 10:38:11.209543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.814 [2024-07-15 10:38:11.209555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.814 [2024-07-15 10:38:11.212764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.814 [2024-07-15 10:38:11.222147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.814 [2024-07-15 10:38:11.222519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.814 [2024-07-15 10:38:11.222547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.814 [2024-07-15 10:38:11.222562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.814 [2024-07-15 10:38:11.222790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.814 [2024-07-15 10:38:11.223008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.814 [2024-07-15 10:38:11.223029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.814 [2024-07-15 10:38:11.223042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.814 [2024-07-15 10:38:11.226239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.814 [2024-07-15 10:38:11.235590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.814 [2024-07-15 10:38:11.235963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.814 [2024-07-15 10:38:11.235991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.814 [2024-07-15 10:38:11.236006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.814 [2024-07-15 10:38:11.236225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.814 [2024-07-15 10:38:11.236451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.814 [2024-07-15 10:38:11.236471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.814 [2024-07-15 10:38:11.236484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.814 [2024-07-15 10:38:11.239641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.814 [2024-07-15 10:38:11.249033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.814 [2024-07-15 10:38:11.249403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.814 [2024-07-15 10:38:11.249430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.814 [2024-07-15 10:38:11.249446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.814 [2024-07-15 10:38:11.249659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.814 [2024-07-15 10:38:11.249894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.814 [2024-07-15 10:38:11.249914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.814 [2024-07-15 10:38:11.249927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.814 [2024-07-15 10:38:11.253104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.814 [2024-07-15 10:38:11.262443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.814 [2024-07-15 10:38:11.262823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.814 [2024-07-15 10:38:11.262850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.814 [2024-07-15 10:38:11.262866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.814 [2024-07-15 10:38:11.263087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.814 [2024-07-15 10:38:11.263316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.814 [2024-07-15 10:38:11.263336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.814 [2024-07-15 10:38:11.263349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.814 [2024-07-15 10:38:11.266543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.814 [2024-07-15 10:38:11.275846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.814 [2024-07-15 10:38:11.276247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.814 [2024-07-15 10:38:11.276274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.814 [2024-07-15 10:38:11.276290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.814 [2024-07-15 10:38:11.276518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.814 [2024-07-15 10:38:11.276728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.814 [2024-07-15 10:38:11.276748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.814 [2024-07-15 10:38:11.276765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.814 [2024-07-15 10:38:11.279906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.814 [2024-07-15 10:38:11.289450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.814 [2024-07-15 10:38:11.289826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.814 [2024-07-15 10:38:11.289854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.814 [2024-07-15 10:38:11.289869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.814 [2024-07-15 10:38:11.290091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.814 [2024-07-15 10:38:11.290320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.814 [2024-07-15 10:38:11.290340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.814 [2024-07-15 10:38:11.290353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.814 [2024-07-15 10:38:11.293548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.814 [2024-07-15 10:38:11.302904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.814 [2024-07-15 10:38:11.303274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.814 [2024-07-15 10:38:11.303302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.814 [2024-07-15 10:38:11.303317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.814 [2024-07-15 10:38:11.303530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.814 [2024-07-15 10:38:11.303756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.815 [2024-07-15 10:38:11.303775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.815 [2024-07-15 10:38:11.303788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.815 [2024-07-15 10:38:11.306958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.815 [2024-07-15 10:38:11.316336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.815 [2024-07-15 10:38:11.316714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.815 [2024-07-15 10:38:11.316741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.815 [2024-07-15 10:38:11.316757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.815 [2024-07-15 10:38:11.316993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.815 [2024-07-15 10:38:11.317204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.815 [2024-07-15 10:38:11.317224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.815 [2024-07-15 10:38:11.317237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.815 [2024-07-15 10:38:11.320429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.815 [2024-07-15 10:38:11.329786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.815 [2024-07-15 10:38:11.330183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.815 [2024-07-15 10:38:11.330211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:16.815 [2024-07-15 10:38:11.330227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:16.815 [2024-07-15 10:38:11.330441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:16.815 [2024-07-15 10:38:11.330666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.815 [2024-07-15 10:38:11.330686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.815 [2024-07-15 10:38:11.330698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.815 [2024-07-15 10:38:11.333859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.815 [2024-07-15 10:38:11.343367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.815 [2024-07-15 10:38:11.343756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.815 [2024-07-15 10:38:11.343783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:16.815 [2024-07-15 10:38:11.343799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:16.815 [2024-07-15 10:38:11.344020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:16.815 [2024-07-15 10:38:11.344252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.815 [2024-07-15 10:38:11.344272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.815 [2024-07-15 10:38:11.344285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.815 [2024-07-15 10:38:11.347441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.815 [2024-07-15 10:38:11.356827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.815 [2024-07-15 10:38:11.357257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.815 [2024-07-15 10:38:11.357285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:16.815 [2024-07-15 10:38:11.357300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:16.815 [2024-07-15 10:38:11.357513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:16.815 [2024-07-15 10:38:11.357739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.815 [2024-07-15 10:38:11.357759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.815 [2024-07-15 10:38:11.357772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.815 [2024-07-15 10:38:11.360950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.815 [2024-07-15 10:38:11.370292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.815 [2024-07-15 10:38:11.370671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.815 [2024-07-15 10:38:11.370698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:16.815 [2024-07-15 10:38:11.370713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:16.815 [2024-07-15 10:38:11.370951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:16.815 [2024-07-15 10:38:11.371168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.815 [2024-07-15 10:38:11.371188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.815 [2024-07-15 10:38:11.371201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.815 [2024-07-15 10:38:11.374397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.815 [2024-07-15 10:38:11.383754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.815 [2024-07-15 10:38:11.384117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.815 [2024-07-15 10:38:11.384146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:16.815 [2024-07-15 10:38:11.384161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:16.815 [2024-07-15 10:38:11.384390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:16.815 [2024-07-15 10:38:11.384600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.815 [2024-07-15 10:38:11.384620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.815 [2024-07-15 10:38:11.384633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.815 [2024-07-15 10:38:11.387792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.815 [2024-07-15 10:38:11.397220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.815 [2024-07-15 10:38:11.397636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.815 [2024-07-15 10:38:11.397663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:16.815 [2024-07-15 10:38:11.397678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:16.815 [2024-07-15 10:38:11.397917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:16.815 [2024-07-15 10:38:11.398129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.815 [2024-07-15 10:38:11.398149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.815 [2024-07-15 10:38:11.398162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.815 [2024-07-15 10:38:11.401362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.815 [2024-07-15 10:38:11.410713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.815 [2024-07-15 10:38:11.411137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.815 [2024-07-15 10:38:11.411166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:16.815 [2024-07-15 10:38:11.411182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:16.815 [2024-07-15 10:38:11.411395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:16.815 [2024-07-15 10:38:11.411621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.815 [2024-07-15 10:38:11.411642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.815 [2024-07-15 10:38:11.411655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.815 [2024-07-15 10:38:11.414885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.815 [2024-07-15 10:38:11.424254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.815 [2024-07-15 10:38:11.424649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.815 [2024-07-15 10:38:11.424676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:16.815 [2024-07-15 10:38:11.424692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:16.815 [2024-07-15 10:38:11.424915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:16.815 [2024-07-15 10:38:11.425133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.815 [2024-07-15 10:38:11.425154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.815 [2024-07-15 10:38:11.425168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.815 [2024-07-15 10:38:11.428456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.815 [2024-07-15 10:38:11.437673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.815 [2024-07-15 10:38:11.438050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.815 [2024-07-15 10:38:11.438078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:16.815 [2024-07-15 10:38:11.438093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:16.815 [2024-07-15 10:38:11.438307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:16.815 [2024-07-15 10:38:11.438524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.815 [2024-07-15 10:38:11.438544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.815 [2024-07-15 10:38:11.438558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.815 [2024-07-15 10:38:11.441896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.815 [2024-07-15 10:38:11.451323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.815 [2024-07-15 10:38:11.451698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.815 [2024-07-15 10:38:11.451725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:16.815 [2024-07-15 10:38:11.451741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:16.815 [2024-07-15 10:38:11.451963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:16.815 [2024-07-15 10:38:11.452197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.815 [2024-07-15 10:38:11.452218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.816 [2024-07-15 10:38:11.452230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.816 [2024-07-15 10:38:11.455504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.075 [2024-07-15 10:38:11.465004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.075 [2024-07-15 10:38:11.465359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.075 [2024-07-15 10:38:11.465387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:17.075 [2024-07-15 10:38:11.465407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:17.075 [2024-07-15 10:38:11.465636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:17.075 [2024-07-15 10:38:11.465847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.075 [2024-07-15 10:38:11.465867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.075 [2024-07-15 10:38:11.465903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.075 [2024-07-15 10:38:11.469222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.075 [2024-07-15 10:38:11.478442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.075 [2024-07-15 10:38:11.478833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.075 [2024-07-15 10:38:11.478861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:17.075 [2024-07-15 10:38:11.478893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:17.075 [2024-07-15 10:38:11.479107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:17.075 [2024-07-15 10:38:11.479335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.075 [2024-07-15 10:38:11.479356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.075 [2024-07-15 10:38:11.479369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.075 [2024-07-15 10:38:11.482537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.075 [2024-07-15 10:38:11.491935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.075 [2024-07-15 10:38:11.492362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.075 [2024-07-15 10:38:11.492390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:17.075 [2024-07-15 10:38:11.492405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:17.075 [2024-07-15 10:38:11.492618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:17.075 [2024-07-15 10:38:11.492843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.075 [2024-07-15 10:38:11.492863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.075 [2024-07-15 10:38:11.492882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.075 [2024-07-15 10:38:11.496061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.075 [2024-07-15 10:38:11.505441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.075 [2024-07-15 10:38:11.505844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.075 [2024-07-15 10:38:11.505886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:17.075 [2024-07-15 10:38:11.505904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:17.075 [2024-07-15 10:38:11.506117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:17.075 [2024-07-15 10:38:11.506347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.075 [2024-07-15 10:38:11.506372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.075 [2024-07-15 10:38:11.506385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.075 [2024-07-15 10:38:11.509543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.075 [2024-07-15 10:38:11.518953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.075 [2024-07-15 10:38:11.519375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.075 [2024-07-15 10:38:11.519403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:17.075 [2024-07-15 10:38:11.519418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:17.075 [2024-07-15 10:38:11.519632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:17.075 [2024-07-15 10:38:11.519858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.075 [2024-07-15 10:38:11.519894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.075 [2024-07-15 10:38:11.519908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.075 [2024-07-15 10:38:11.523066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.075 [2024-07-15 10:38:11.532426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.075 [2024-07-15 10:38:11.532826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.075 [2024-07-15 10:38:11.532853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:17.075 [2024-07-15 10:38:11.532869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:17.075 [2024-07-15 10:38:11.533091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:17.075 [2024-07-15 10:38:11.533320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.075 [2024-07-15 10:38:11.533340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.075 [2024-07-15 10:38:11.533354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.075 [2024-07-15 10:38:11.536546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.075 [2024-07-15 10:38:11.545909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.075 [2024-07-15 10:38:11.546264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.075 [2024-07-15 10:38:11.546292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:17.075 [2024-07-15 10:38:11.546307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:17.076 [2024-07-15 10:38:11.546535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:17.076 [2024-07-15 10:38:11.546745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.076 [2024-07-15 10:38:11.546764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.076 [2024-07-15 10:38:11.546777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.076 [2024-07-15 10:38:11.549963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.076 [2024-07-15 10:38:11.559342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.076 [2024-07-15 10:38:11.559726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.076 [2024-07-15 10:38:11.559754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:17.076 [2024-07-15 10:38:11.559769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:17.076 [2024-07-15 10:38:11.560006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:17.076 [2024-07-15 10:38:11.560217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.076 [2024-07-15 10:38:11.560238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.076 [2024-07-15 10:38:11.560250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.076 [2024-07-15 10:38:11.563442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.076 [2024-07-15 10:38:11.572789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.076 [2024-07-15 10:38:11.573181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.076 [2024-07-15 10:38:11.573209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:17.076 [2024-07-15 10:38:11.573224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:17.076 [2024-07-15 10:38:11.573437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:17.076 [2024-07-15 10:38:11.573664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.076 [2024-07-15 10:38:11.573685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.076 [2024-07-15 10:38:11.573697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.076 [2024-07-15 10:38:11.576857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.076 [2024-07-15 10:38:11.586380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.076 [2024-07-15 10:38:11.586758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.076 [2024-07-15 10:38:11.586786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:17.076 [2024-07-15 10:38:11.586801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:17.076 [2024-07-15 10:38:11.587024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:17.076 [2024-07-15 10:38:11.587255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.076 [2024-07-15 10:38:11.587276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.076 [2024-07-15 10:38:11.587289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.076 [2024-07-15 10:38:11.590441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.076 [2024-07-15 10:38:11.599831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.076 [2024-07-15 10:38:11.600236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.076 [2024-07-15 10:38:11.600263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:17.076 [2024-07-15 10:38:11.600279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:17.076 [2024-07-15 10:38:11.600497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:17.076 [2024-07-15 10:38:11.600723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.076 [2024-07-15 10:38:11.600743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.076 [2024-07-15 10:38:11.600756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.076 [2024-07-15 10:38:11.603935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.076 [2024-07-15 10:38:11.613276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.076 [2024-07-15 10:38:11.613652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.076 [2024-07-15 10:38:11.613679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:17.076 [2024-07-15 10:38:11.613694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:17.076 [2024-07-15 10:38:11.613919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:17.076 [2024-07-15 10:38:11.614137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.076 [2024-07-15 10:38:11.614172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.076 [2024-07-15 10:38:11.614186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.076 [2024-07-15 10:38:11.617382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.076 [2024-07-15 10:38:11.626734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.076 [2024-07-15 10:38:11.627133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.076 [2024-07-15 10:38:11.627161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:17.076 [2024-07-15 10:38:11.627177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:17.076 [2024-07-15 10:38:11.627390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:17.076 [2024-07-15 10:38:11.627616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.076 [2024-07-15 10:38:11.627637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.076 [2024-07-15 10:38:11.627649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.076 [2024-07-15 10:38:11.630769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.076 [2024-07-15 10:38:11.640398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.076 [2024-07-15 10:38:11.640805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.076 [2024-07-15 10:38:11.640832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:17.076 [2024-07-15 10:38:11.640847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:17.076 [2024-07-15 10:38:11.641068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:17.076 [2024-07-15 10:38:11.641286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.076 [2024-07-15 10:38:11.641307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.076 [2024-07-15 10:38:11.641325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.076 [2024-07-15 10:38:11.644582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.076 [2024-07-15 10:38:11.653962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.076 [2024-07-15 10:38:11.654321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.076 [2024-07-15 10:38:11.654348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:17.076 [2024-07-15 10:38:11.654363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:17.076 [2024-07-15 10:38:11.654576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:17.076 [2024-07-15 10:38:11.654793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.076 [2024-07-15 10:38:11.654814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.076 [2024-07-15 10:38:11.654827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.076 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:17.076 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:25:17.076 10:38:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:17.076 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:17.076 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:17.076 [2024-07-15 10:38:11.658079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.076 [2024-07-15 10:38:11.667533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.076 [2024-07-15 10:38:11.667920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.076 [2024-07-15 10:38:11.667947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:17.076 [2024-07-15 10:38:11.667962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:17.076 [2024-07-15 10:38:11.668175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:17.076 [2024-07-15 10:38:11.668401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.076 [2024-07-15 10:38:11.668421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.076 [2024-07-15 10:38:11.668434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.076 [2024-07-15 10:38:11.671620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.076 10:38:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:17.076 10:38:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:17.076 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:17.076 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:17.076 [2024-07-15 10:38:11.681065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.076 [2024-07-15 10:38:11.681448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.076 [2024-07-15 10:38:11.681476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:17.076 [2024-07-15 10:38:11.681491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:17.076 [2024-07-15 10:38:11.681727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:17.077 [2024-07-15 10:38:11.681967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.077 [2024-07-15 10:38:11.681989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.077 [2024-07-15 10:38:11.682002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.077 [2024-07-15 10:38:11.685267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
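The rpc_cmd above is the autotest helper that forwards its arguments to SPDK's scripts/rpc.py. Run by hand, the same transport creation would look like this (a sketch; assumes the target's default RPC socket):

  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192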
00:25:17.077 [2024-07-15 10:38:11.686384] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:17.077 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:17.077 10:38:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:17.077 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:17.077 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:17.077 [2024-07-15 10:38:11.694646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.077 [2024-07-15 10:38:11.695043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.077 [2024-07-15 10:38:11.695070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:17.077 [2024-07-15 10:38:11.695086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:17.077 [2024-07-15 10:38:11.695299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:17.077 [2024-07-15 10:38:11.695547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.077 [2024-07-15 10:38:11.695568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.077 [2024-07-15 10:38:11.695581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.077 [2024-07-15 10:38:11.698938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.077 [2024-07-15 10:38:11.708193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.077 [2024-07-15 10:38:11.708652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.077 [2024-07-15 10:38:11.708679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420
00:25:17.077 [2024-07-15 10:38:11.708694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set
00:25:17.077 [2024-07-15 10:38:11.708946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor
00:25:17.077 [2024-07-15 10:38:11.709164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.077 [2024-07-15 10:38:11.709185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.077 [2024-07-15 10:38:11.709198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.077 [2024-07-15 10:38:11.712463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.077 [2024-07-15 10:38:11.721794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.077 [2024-07-15 10:38:11.722318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.077 [2024-07-15 10:38:11.722362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:17.077 [2024-07-15 10:38:11.722382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:17.077 [2024-07-15 10:38:11.722632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:17.077 [2024-07-15 10:38:11.722849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.077 [2024-07-15 10:38:11.722895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.077 [2024-07-15 10:38:11.722914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.335 [2024-07-15 10:38:11.726166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.335 Malloc0 00:25:17.335 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.335 10:38:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:17.335 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.335 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:17.335 [2024-07-15 10:38:11.735478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.335 [2024-07-15 10:38:11.735917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.335 [2024-07-15 10:38:11.735956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:17.335 [2024-07-15 10:38:11.735978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:17.335 [2024-07-15 10:38:11.736215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:17.335 [2024-07-15 10:38:11.736428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.335 [2024-07-15 10:38:11.736448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.335 [2024-07-15 10:38:11.736463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.335 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.335 10:38:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:17.335 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.335 [2024-07-15 10:38:11.740013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.335 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:17.335 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.335 10:38:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:17.335 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.335 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:17.335 [2024-07-15 10:38:11.749152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.335 [2024-07-15 10:38:11.749553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.335 [2024-07-15 10:38:11.749581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1cac0 with addr=10.0.0.2, port=4420 00:25:17.335 [2024-07-15 10:38:11.749597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1cac0 is same with the state(5) to be set 00:25:17.335 [2024-07-15 10:38:11.749811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1cac0 (9): Bad file descriptor 00:25:17.335 [2024-07-15 10:38:11.750066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.335 [2024-07-15 10:38:11.750088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.335 [2024-07-15 10:38:11.750101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.335 [2024-07-15 10:38:11.751587] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.335 [2024-07-15 10:38:11.753398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.335 10:38:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.335 10:38:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2413641 00:25:17.335 [2024-07-15 10:38:11.762678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.335 [2024-07-15 10:38:11.889336] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
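With the listener in place, the reset loop finally converges (the bdev_nvme.c:2067 NOTICE above). Untangled from the interleaved reset noise, the whole target bring-up reduces to five RPCs -- shown here as a hand-run sketch with SPDK's scripts/rpc.py, arguments exactly as logged:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB IO unit size
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420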
00:25:27.299
00:25:27.299 Latency(us)
00:25:27.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:27.299 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:27.299 Verification LBA range: start 0x0 length 0x4000
00:25:27.299 Nvme1n1 : 15.00 5962.82 23.29 10645.99 0.00 7682.17 831.34 17379.18
00:25:27.299 ===================================================================================================================
00:25:27.299 Total : 5962.82 23.29 10645.99 0.00 7682.17 831.34 17379.18
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:27.299 rmmod nvme_tcp
00:25:27.299 rmmod nvme_fabrics
00:25:27.299 rmmod nvme_keyring
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2414310 ']'
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2414310
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2414310 ']'
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2414310
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2414310
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2414310'
00:25:27.299 killing process with pid 2414310
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2414310
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2414310
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
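Two sanity checks on the table above: with 4096-byte IOs, MiB/s should equal IOPS x 4096 / 2^20, which holds (a throwaway bc check):

  echo 'scale=2; 5962.82 * 4096 / 1048576' | bc    # 23.29, matching the MiB/s column

And the Fail/s figure exceeding IOPS is expected here, since much of the 15 s run was spent in the refused-connection reset loop logged earlier.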
00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.299 10:38:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.685 10:38:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:28.685 00:25:28.685 real 0m22.975s 00:25:28.685 user 1m1.344s 00:25:28.685 sys 0m4.439s 00:25:28.685 10:38:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:28.685 10:38:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:28.685 ************************************ 00:25:28.685 END TEST nvmf_bdevperf 00:25:28.685 ************************************ 00:25:28.685 10:38:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:28.685 10:38:22 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:28.685 10:38:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:28.685 10:38:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.685 10:38:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:28.685 ************************************ 00:25:28.685 START TEST nvmf_target_disconnect 00:25:28.685 ************************************ 00:25:28.685 10:38:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:28.685 * Looking for test storage... 
00:25:28.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:28.685 10:38:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.685 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:28.685 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.685 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.685 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.685 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.685 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.685 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.685 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.685 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:25:28.686 10:38:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:30.596 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:30.597 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:30.597 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.597 10:38:24 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:30.597 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:30.597 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:30.597 10:38:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:30.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:25:30.598 00:25:30.598 --- 10.0.0.2 ping statistics --- 00:25:30.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.598 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:30.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:25:30.598 00:25:30.598 --- 10.0.0.1 ping statistics --- 00:25:30.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.598 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:30.598 ************************************ 00:25:30.598 START TEST nvmf_target_disconnect_tc1 00:25:30.598 ************************************ 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:25:30.598 
10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:30.598 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:30.598 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.891 [2024-07-15 10:38:25.267003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.891 [2024-07-15 10:38:25.267077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c11a0 with addr=10.0.0.2, port=4420 00:25:30.891 [2024-07-15 10:38:25.267120] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:30.891 [2024-07-15 10:38:25.267142] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:30.891 [2024-07-15 10:38:25.267157] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:30.891 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:30.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:30.891 Initializing NVMe Controllers 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:30.891 00:25:30.891 real 0m0.098s 00:25:30.891 user 0m0.044s 00:25:30.891 sys 
0m0.053s 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:30.891 ************************************ 00:25:30.891 END TEST nvmf_target_disconnect_tc1 00:25:30.891 ************************************ 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:30.891 ************************************ 00:25:30.891 START TEST nvmf_target_disconnect_tc2 00:25:30.891 ************************************ 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2417460 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2417460 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2417460 ']' 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
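The trace above captures the plumbing this run depends on before tc2 can start: the e810 port cvl_0_0 is moved into a private network namespace, addressed as 10.0.0.2, its peer port cvl_0_1 stays on the host as the initiator side at 10.0.0.1, and nvmf_tgt is launched inside the namespace, with the harness waiting for the RPC socket before configuring anything. A minimal sketch of that pattern, assuming root, the interface and namespace names from this log (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk), a shortened ./build/bin path, and a stand-in socket-poll loop in place of the autotest waitforlisten helper:

    # create the namespace and move the target-side port into it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps cvl_0_1 on the host; the target gets 10.0.0.2 in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic on port 4420 through to the initiator-side port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # launch the target inside the namespace, then wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

Every command here mirrors one traced in the log (nvmf/common.sh lines 248-264 and the nvmfappstart invocation); only the poll loop and the relative binary path are illustrative stand-ins.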
00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:30.891 10:38:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:30.891 [2024-07-15 10:38:25.386242] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:30.891 [2024-07-15 10:38:25.386323] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.891 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.891 [2024-07-15 10:38:25.456041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:31.150 [2024-07-15 10:38:25.579424] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.150 [2024-07-15 10:38:25.579493] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.150 [2024-07-15 10:38:25.579510] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.150 [2024-07-15 10:38:25.579522] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.150 [2024-07-15 10:38:25.579534] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:31.150 [2024-07-15 10:38:25.579957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:31.150 [2024-07-15 10:38:25.579982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:31.150 [2024-07-15 10:38:25.580240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:31.150 [2024-07-15 10:38:25.580248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:31.717 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:31.717 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:31.717 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:31.717 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:31.717 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:31.717 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.717 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:31.717 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.717 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:31.717 Malloc0 00:25:31.717 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.717 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:31.717 10:38:26 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.717 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:31.717 [2024-07-15 10:38:26.356055] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.717 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.717 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:31.717 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.717 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:31.975 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.975 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:31.975 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.975 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:31.975 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.975 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.975 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.975 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:31.975 [2024-07-15 10:38:26.384304] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.975 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.975 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:31.975 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.975 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:31.975 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.975 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2417614 00:25:31.975 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:31.975 10:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:31.975 EAL: No free 2048 kB 
hugepages reported on node 1 00:25:33.887 10:38:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2417460 00:25:33.887 10:38:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 [2024-07-15 10:38:28.408891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 
starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Write completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 Read completed with error (sct=0, sc=8) 00:25:33.887 starting I/O failed 00:25:33.887 [2024-07-15 10:38:28.409227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:33.887 [2024-07-15 10:38:28.409542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.887 [2024-07-15 10:38:28.409571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.887 qpair failed and we were unable to recover it. 00:25:33.887 [2024-07-15 10:38:28.409755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.887 [2024-07-15 10:38:28.409781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.887 qpair failed and we were unable to recover it. 
00:25:33.887 [2024-07-15 10:38:28.409941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.887 [2024-07-15 10:38:28.409968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.887 qpair failed and we were unable to recover it. 00:25:33.887 [2024-07-15 10:38:28.410102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.887 [2024-07-15 10:38:28.410126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.887 qpair failed and we were unable to recover it. 00:25:33.887 [2024-07-15 10:38:28.410280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.887 [2024-07-15 10:38:28.410305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.887 qpair failed and we were unable to recover it. 00:25:33.887 [2024-07-15 10:38:28.410464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.410506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.410675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.410702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.410858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.410892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.411015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.411040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.411183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.411210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.411396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.411421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.411573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.411605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 
00:25:33.888 [2024-07-15 10:38:28.411766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.411791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.411916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.411942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.412099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.412123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.412263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.412288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.412498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.412523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.412682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.412706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.412859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.412891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.413025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.413050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.413200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.413224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.413410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.413452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 
00:25:33.888 [2024-07-15 10:38:28.413595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.413620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.413751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.413775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.413962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.413989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Write completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Write completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Write completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Write completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Write completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Write completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with 
error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Read completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 Write completed with error (sct=0, sc=8) 00:25:33.888 starting I/O failed 00:25:33.888 [2024-07-15 10:38:28.414300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:33.888 [2024-07-15 10:38:28.414488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.414527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.414710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.414737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.414891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.414918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.415070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.415096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.415248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.415274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.415425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.415451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.415697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.415722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.415853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.415885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.888 [2024-07-15 10:38:28.416015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.416040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 
00:25:33.888 [2024-07-15 10:38:28.416157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.888 [2024-07-15 10:38:28.416182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.888 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.416333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.416360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.416503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.416529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.416652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.416679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.416887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.416932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.417054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.417080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.417231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.417257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.417408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.417434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.417663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.417688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.417840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.417865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 
00:25:33.889 [2024-07-15 10:38:28.418020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.418045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.418173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.418198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.418347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.418373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.418550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.418575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.418699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.418726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.418924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.418952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.419072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.419097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.419242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.419267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.419394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.419419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.419591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.419616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 
00:25:33.889 [2024-07-15 10:38:28.419766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.419793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.419955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.419982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.420108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.420134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.420313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.420339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.420485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.420518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.420696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.420722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.420847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.420873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.421042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.421068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.421194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.421220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 [2024-07-15 10:38:28.421396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.421421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 
00:25:33.889 [2024-07-15 10:38:28.421566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.889 [2024-07-15 10:38:28.421593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.889 qpair failed and we were unable to recover it. 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Write completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Write completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Write completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Write completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Write completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Write completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Write completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Write completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Write completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Write completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Write completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Write completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Read completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Write completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.889 Write completed with error (sct=0, sc=8) 00:25:33.889 starting I/O failed 00:25:33.890 [2024-07-15 10:38:28.421952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:33.890 [2024-07-15 10:38:28.422104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.422143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, 
port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.422268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.422293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.422442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.422467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.422704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.422729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.422901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.422927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.423078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.423103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.423281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.423306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.423477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.423502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.423614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.423639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.423812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.423837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.423970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.423995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 
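The burst of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" lines above is the other side of the same failure: when the qpair is torn down, every outstanding I/O is completed with an abort status rather than a result. Assuming the fields follow the NVMe base spec status tables, sct=0 is the Generic Command Status type and sc=0x08 is "Command Aborted due to SQ Deletion"; the follow-up "CQ transport error -6" from spdk_nvme_qpair_process_completions is the negative Linux errno -ENXIO, which strerror() renders exactly as the quoted "No such device or address". A small decode sketch:

/* Decode sketch for the aborted completions and the CQ transport
 * error above. Assumptions: sct/sc follow the NVMe base spec status
 * tables, and -6 is a negative Linux errno (-ENXIO). */
#include <stdio.h>
#include <string.h>

int main(void)
{
    int sct = 0, sc = 0x08;   /* from "(sct=0, sc=8)" in the log */
    int cq_err = -6;          /* from "CQ transport error -6" */

    if (sct == 0 && sc == 0x08)
        printf("sct=0/sc=8: Generic Command Status, "
               "Command Aborted due to SQ Deletion\n");
    printf("CQ transport error %d -> %s\n", cq_err, strerror(-cq_err));
    return 0;
}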
00:25:33.890 [2024-07-15 10:38:28.424118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.424143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.424351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.424375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.424556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.424620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.424761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.424790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.424987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.425013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.425159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.425184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.425327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.425355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.425502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.425526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.425667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.425692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.425915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.425955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 
00:25:33.890 [2024-07-15 10:38:28.426129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.426185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.426370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.426397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.426543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.426568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.426722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.426747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.426865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.426899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.427024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.427049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.427176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.427200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.427372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.427397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.427522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.427547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.427717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.427741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 
00:25:33.890 [2024-07-15 10:38:28.427889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.427915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.428033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.428058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.428212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.428237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.428364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.428390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.428542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.428567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.428691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.428718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.428894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.428920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.429078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.429102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.429247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.429272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 00:25:33.890 [2024-07-15 10:38:28.429387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.890 [2024-07-15 10:38:28.429420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.890 qpair failed and we were unable to recover it. 
00:25:33.891 [2024-07-15 10:38:28.429599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.429624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.429772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.429813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.429988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.430014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.430165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.430190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.430365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.430389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.430535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.430560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.430679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.430704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.430826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.430851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.431036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.431061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.431201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.431226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 
00:25:33.891 [2024-07-15 10:38:28.431346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.431371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.431516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.431541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.431662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.431688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.431900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.431939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.432061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.432087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.432239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.432264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.432386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.432411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.432628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.432653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.432821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.432849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.433010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.433036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 
00:25:33.891 [2024-07-15 10:38:28.433181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.433207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.433333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.433360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.433502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.433531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.433726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.433751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.433924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.433951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.434097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.434122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.434282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.434308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.434451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.434476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.434626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.434654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.434770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.434795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 
00:25:33.891 [2024-07-15 10:38:28.434941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.434967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.435086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.435113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.891 [2024-07-15 10:38:28.435237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.891 [2024-07-15 10:38:28.435262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.891 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.435410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.435434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.435584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.435609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.435759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.435784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.435933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.435958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.436107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.436132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.436285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.436310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.436436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.436465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 
00:25:33.892 [2024-07-15 10:38:28.436648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.436676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.436829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.436855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.436975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.437001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.437130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.437155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.437331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.437357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.437502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.437528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.437657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.437682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.437860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.437897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.438068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.438093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.438270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.438312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 
00:25:33.892 [2024-07-15 10:38:28.438461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.438488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.438642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.438667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.438820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.438848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.439008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.439034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.439206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.439234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.439489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.439540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.439710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.439734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.439889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.439915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.440061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.440086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.440244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.440270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 
00:25:33.892 [2024-07-15 10:38:28.440446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.440471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.440620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.440644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.440830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.440855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.441014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.441041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.441163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.441188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.441310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.441335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.441496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.441521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.441687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.441717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.441853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.441885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.442036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.442062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 
00:25:33.892 [2024-07-15 10:38:28.442261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.442289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.442460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.442485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.442610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.442635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.892 qpair failed and we were unable to recover it. 00:25:33.892 [2024-07-15 10:38:28.442808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.892 [2024-07-15 10:38:28.442835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.442986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.443012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.443159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.443183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.443370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.443395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.443540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.443565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.443699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.443726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.443901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.443931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 
00:25:33.893 [2024-07-15 10:38:28.444082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.444107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.444261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.444286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.444409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.444434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.444586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.444610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.444783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.444811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.444980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.445007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.445151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.445176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.445319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.445344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.445521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.445561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.445731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.445756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 
00:25:33.893 [2024-07-15 10:38:28.445871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.445901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.446050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.446076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.446196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.446220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.446388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.446416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.446576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.446605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.446767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.446796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.446978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.447004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.447134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.447159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.447279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.447303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 00:25:33.893 [2024-07-15 10:38:28.447474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.893 [2024-07-15 10:38:28.447498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.893 qpair failed and we were unable to recover it. 
00:25:33.893 [2024-07-15 10:38:28.447612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.893 [2024-07-15 10:38:28.447636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:33.893 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt logged between 10:38:28.447612 and 10:38:28.484918, cycling through tqpair values 0x7fe654000b90, 0x7fe64c000b90, 0x7fe658000b90 and 0xc94200, all targeting addr=10.0.0.2, port=4420 ...]
00:25:33.903 [2024-07-15 10:38:28.484891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.903 [2024-07-15 10:38:28.484918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.903 qpair failed and we were unable to recover it.
00:25:33.903 [2024-07-15 10:38:28.485044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.903 [2024-07-15 10:38:28.485070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.903 qpair failed and we were unable to recover it. 00:25:33.903 [2024-07-15 10:38:28.485188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.903 [2024-07-15 10:38:28.485213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.903 qpair failed and we were unable to recover it. 00:25:33.903 [2024-07-15 10:38:28.485390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.903 [2024-07-15 10:38:28.485415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.903 qpair failed and we were unable to recover it. 00:25:33.903 [2024-07-15 10:38:28.485541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.903 [2024-07-15 10:38:28.485567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.903 qpair failed and we were unable to recover it. 00:25:33.903 [2024-07-15 10:38:28.485711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.903 [2024-07-15 10:38:28.485741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.903 qpair failed and we were unable to recover it. 00:25:33.903 [2024-07-15 10:38:28.485914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.903 [2024-07-15 10:38:28.485969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.903 qpair failed and we were unable to recover it. 00:25:33.903 [2024-07-15 10:38:28.486106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.903 [2024-07-15 10:38:28.486134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.903 qpair failed and we were unable to recover it. 00:25:33.903 [2024-07-15 10:38:28.486289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.903 [2024-07-15 10:38:28.486315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.903 qpair failed and we were unable to recover it. 00:25:33.903 [2024-07-15 10:38:28.486437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.903 [2024-07-15 10:38:28.486464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.903 qpair failed and we were unable to recover it. 00:25:33.903 [2024-07-15 10:38:28.486613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.903 [2024-07-15 10:38:28.486641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.903 qpair failed and we were unable to recover it. 
00:25:33.903 [2024-07-15 10:38:28.486823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.903 [2024-07-15 10:38:28.486849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.903 qpair failed and we were unable to recover it. 00:25:33.903 [2024-07-15 10:38:28.486981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.903 [2024-07-15 10:38:28.487007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.903 qpair failed and we were unable to recover it. 00:25:33.903 [2024-07-15 10:38:28.487155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.903 [2024-07-15 10:38:28.487180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.903 qpair failed and we were unable to recover it. 00:25:33.903 [2024-07-15 10:38:28.487372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.903 [2024-07-15 10:38:28.487430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.903 qpair failed and we were unable to recover it. 00:25:33.903 [2024-07-15 10:38:28.487697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.903 [2024-07-15 10:38:28.487748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.903 qpair failed and we were unable to recover it. 00:25:33.903 [2024-07-15 10:38:28.487908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.904 [2024-07-15 10:38:28.487952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.904 qpair failed and we were unable to recover it. 00:25:33.904 [2024-07-15 10:38:28.488078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.904 [2024-07-15 10:38:28.488102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.904 qpair failed and we were unable to recover it. 00:25:33.904 [2024-07-15 10:38:28.488252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.904 [2024-07-15 10:38:28.488276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.904 qpair failed and we were unable to recover it. 00:25:33.904 [2024-07-15 10:38:28.488394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.904 [2024-07-15 10:38:28.488419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.904 qpair failed and we were unable to recover it. 00:25:33.904 [2024-07-15 10:38:28.488546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.904 [2024-07-15 10:38:28.488570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.904 qpair failed and we were unable to recover it. 
00:25:33.904 [2024-07-15 10:38:28.488714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.904 [2024-07-15 10:38:28.488738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.904 qpair failed and we were unable to recover it. 00:25:33.904 [2024-07-15 10:38:28.488882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.904 [2024-07-15 10:38:28.488925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.904 qpair failed and we were unable to recover it. 00:25:33.904 [2024-07-15 10:38:28.489051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.904 [2024-07-15 10:38:28.489076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.904 qpair failed and we were unable to recover it. 00:25:33.904 [2024-07-15 10:38:28.489225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.904 [2024-07-15 10:38:28.489254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.904 qpair failed and we were unable to recover it. 00:25:33.904 [2024-07-15 10:38:28.489430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.904 [2024-07-15 10:38:28.489455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.904 qpair failed and we were unable to recover it. 00:25:33.904 [2024-07-15 10:38:28.489628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.904 [2024-07-15 10:38:28.489653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.904 qpair failed and we were unable to recover it. 00:25:33.904 [2024-07-15 10:38:28.489772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.905 [2024-07-15 10:38:28.489797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.905 qpair failed and we were unable to recover it. 00:25:33.905 [2024-07-15 10:38:28.489956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.905 [2024-07-15 10:38:28.489981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.905 qpair failed and we were unable to recover it. 00:25:33.905 [2024-07-15 10:38:28.490097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.905 [2024-07-15 10:38:28.490121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.905 qpair failed and we were unable to recover it. 00:25:33.905 [2024-07-15 10:38:28.490270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.905 [2024-07-15 10:38:28.490295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.905 qpair failed and we were unable to recover it. 
00:25:33.905 [2024-07-15 10:38:28.490469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.905 [2024-07-15 10:38:28.490494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.905 qpair failed and we were unable to recover it. 00:25:33.905 [2024-07-15 10:38:28.490695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.905 [2024-07-15 10:38:28.490722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.905 qpair failed and we were unable to recover it. 00:25:33.905 [2024-07-15 10:38:28.490847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.905 [2024-07-15 10:38:28.490880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.905 qpair failed and we were unable to recover it. 00:25:33.905 [2024-07-15 10:38:28.491058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.905 [2024-07-15 10:38:28.491082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.905 qpair failed and we were unable to recover it. 00:25:33.905 [2024-07-15 10:38:28.491226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.905 [2024-07-15 10:38:28.491251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.905 qpair failed and we were unable to recover it. 00:25:33.905 [2024-07-15 10:38:28.491399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.905 [2024-07-15 10:38:28.491424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.905 qpair failed and we were unable to recover it. 00:25:33.905 [2024-07-15 10:38:28.491680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.905 [2024-07-15 10:38:28.491718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:33.905 qpair failed and we were unable to recover it. 00:25:33.905 [2024-07-15 10:38:28.491897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.905 [2024-07-15 10:38:28.491938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:33.905 qpair failed and we were unable to recover it. 00:25:33.905 [2024-07-15 10:38:28.492116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.905 [2024-07-15 10:38:28.492160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:33.905 qpair failed and we were unable to recover it. 00:25:33.905 [2024-07-15 10:38:28.492396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.905 [2024-07-15 10:38:28.492440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:33.906 qpair failed and we were unable to recover it. 
00:25:33.906 [2024-07-15 10:38:28.492678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.906 [2024-07-15 10:38:28.492704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:33.906 qpair failed and we were unable to recover it. 00:25:33.906 [2024-07-15 10:38:28.492853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.906 [2024-07-15 10:38:28.492891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:33.906 qpair failed and we were unable to recover it. 00:25:33.906 [2024-07-15 10:38:28.493022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.906 [2024-07-15 10:38:28.493048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:33.906 qpair failed and we were unable to recover it. 00:25:33.906 [2024-07-15 10:38:28.493212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.906 [2024-07-15 10:38:28.493237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:33.906 qpair failed and we were unable to recover it. 00:25:33.906 [2024-07-15 10:38:28.493435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.906 [2024-07-15 10:38:28.493479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:33.906 qpair failed and we were unable to recover it. 00:25:33.906 [2024-07-15 10:38:28.493720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.906 [2024-07-15 10:38:28.493772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.906 qpair failed and we were unable to recover it. 00:25:33.906 [2024-07-15 10:38:28.493957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.906 [2024-07-15 10:38:28.493983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.906 qpair failed and we were unable to recover it. 00:25:33.906 [2024-07-15 10:38:28.494105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.906 [2024-07-15 10:38:28.494130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.906 qpair failed and we were unable to recover it. 00:25:33.906 [2024-07-15 10:38:28.494355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.907 [2024-07-15 10:38:28.494380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.907 qpair failed and we were unable to recover it. 00:25:33.907 [2024-07-15 10:38:28.494621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.907 [2024-07-15 10:38:28.494673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.907 qpair failed and we were unable to recover it. 
00:25:33.907 [2024-07-15 10:38:28.494834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.907 [2024-07-15 10:38:28.494866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.907 qpair failed and we were unable to recover it. 00:25:33.907 [2024-07-15 10:38:28.495044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.907 [2024-07-15 10:38:28.495069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.908 qpair failed and we were unable to recover it. 00:25:33.908 [2024-07-15 10:38:28.495191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.908 [2024-07-15 10:38:28.495216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.908 qpair failed and we were unable to recover it. 00:25:33.908 [2024-07-15 10:38:28.495428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.495489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.495657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.495682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.495831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.495855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.496006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.496045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.496199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.496244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.496411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.496439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.496665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.496714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 
00:25:33.909 [2024-07-15 10:38:28.496900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.496939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.497095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.497122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.497296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.497354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.497599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.497628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.497776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.497801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.497944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.497983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.498109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.498136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.498295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.498323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.498525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.498592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.498784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.498811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 
00:25:33.909 [2024-07-15 10:38:28.498986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.499012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.499135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.499162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.499330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.499358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.499529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.499554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.499727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.499754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.499928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.499953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.500099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.500124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.500317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.500367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.500592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.500617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.909 [2024-07-15 10:38:28.500740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.500765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 
00:25:33.909 [2024-07-15 10:38:28.500889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.909 [2024-07-15 10:38:28.500915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.909 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.501040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.501065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.501184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.501209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.501350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.501375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.501584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.501609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.501781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.501806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.501930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.501955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.502105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.502130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.502308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.502333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.502508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.502532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 
00:25:33.910 [2024-07-15 10:38:28.502682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.502707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.502830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.502855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.503030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.503056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.503238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.503263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.503414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.503439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.503561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.503586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.503757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.503784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.503930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.503955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.504080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.504105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.504240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.504266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 
00:25:33.910 [2024-07-15 10:38:28.504380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.504406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.504557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.504582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.504701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.504726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.504940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.504966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.505136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.505162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.505317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.505343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.505485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.505510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.505652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.505693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.505856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.505891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.506062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.506087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 
00:25:33.910 [2024-07-15 10:38:28.506200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.506225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.506370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.506412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.506571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.506598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.506764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.506791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.506986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.507011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.507152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.507195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.507364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.507389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.507530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.507559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.507693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.507724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.507925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.507950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 
00:25:33.910 [2024-07-15 10:38:28.508093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.508118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.508304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.508344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.508500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.508527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.508682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.508710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.508850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.508885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.509081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.910 [2024-07-15 10:38:28.509106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.910 qpair failed and we were unable to recover it. 00:25:33.910 [2024-07-15 10:38:28.509227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.911 [2024-07-15 10:38:28.509252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.911 qpair failed and we were unable to recover it. 00:25:33.911 [2024-07-15 10:38:28.509369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.911 [2024-07-15 10:38:28.509394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.911 qpair failed and we were unable to recover it. 00:25:33.911 [2024-07-15 10:38:28.509560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.911 [2024-07-15 10:38:28.509588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.911 qpair failed and we were unable to recover it. 00:25:33.911 [2024-07-15 10:38:28.509783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.911 [2024-07-15 10:38:28.509810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:33.911 qpair failed and we were unable to recover it. 
00:25:33.911 [2024-07-15 10:38:28.509949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.911 [2024-07-15 10:38:28.509975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.911 qpair failed and we were unable to recover it.
00:25:33.911 [2024-07-15 10:38:28.510126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.911 [2024-07-15 10:38:28.510151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.911 qpair failed and we were unable to recover it.
00:25:33.911 [2024-07-15 10:38:28.510283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.911 [2024-07-15 10:38:28.510307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.911 qpair failed and we were unable to recover it.
00:25:33.911 [2024-07-15 10:38:28.510445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.911 [2024-07-15 10:38:28.510473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.912 qpair failed and we were unable to recover it.
00:25:33.912 [2024-07-15 10:38:28.510630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.912 [2024-07-15 10:38:28.510657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.912 qpair failed and we were unable to recover it.
00:25:33.912 [2024-07-15 10:38:28.510825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.912 [2024-07-15 10:38:28.510853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.912 qpair failed and we were unable to recover it.
00:25:33.912 [2024-07-15 10:38:28.511017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.912 [2024-07-15 10:38:28.511042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.912 qpair failed and we were unable to recover it.
00:25:33.912 [2024-07-15 10:38:28.511164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.912 [2024-07-15 10:38:28.511189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.912 qpair failed and we were unable to recover it.
00:25:33.912 [2024-07-15 10:38:28.511351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.912 [2024-07-15 10:38:28.511378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.912 qpair failed and we were unable to recover it.
00:25:33.912 [2024-07-15 10:38:28.511502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.912 [2024-07-15 10:38:28.511529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.912 qpair failed and we were unable to recover it.
00:25:33.912 [2024-07-15 10:38:28.511722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.912 [2024-07-15 10:38:28.511762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.912 qpair failed and we were unable to recover it.
00:25:33.912 [2024-07-15 10:38:28.511901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.912 [2024-07-15 10:38:28.511927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.912 qpair failed and we were unable to recover it.
00:25:33.912 [2024-07-15 10:38:28.512080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.912 [2024-07-15 10:38:28.512105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.912 qpair failed and we were unable to recover it.
00:25:33.912 [2024-07-15 10:38:28.512252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.912 [2024-07-15 10:38:28.512277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.912 qpair failed and we were unable to recover it.
00:25:33.912 [2024-07-15 10:38:28.512442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.912 [2024-07-15 10:38:28.512469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.913 qpair failed and we were unable to recover it.
00:25:33.913 [2024-07-15 10:38:28.512657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.913 [2024-07-15 10:38:28.512684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.913 qpair failed and we were unable to recover it.
00:25:33.913 [2024-07-15 10:38:28.512826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.913 [2024-07-15 10:38:28.512850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.913 qpair failed and we were unable to recover it.
00:25:33.913 [2024-07-15 10:38:28.512974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.913 [2024-07-15 10:38:28.513000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.913 qpair failed and we were unable to recover it.
00:25:33.913 [2024-07-15 10:38:28.513146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.913 [2024-07-15 10:38:28.513172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.913 qpair failed and we were unable to recover it.
00:25:33.913 [2024-07-15 10:38:28.513315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.913 [2024-07-15 10:38:28.513340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.913 qpair failed and we were unable to recover it.
00:25:33.913 [2024-07-15 10:38:28.513487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.913 [2024-07-15 10:38:28.513529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.913 qpair failed and we were unable to recover it.
00:25:33.914 [2024-07-15 10:38:28.513728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.914 [2024-07-15 10:38:28.513753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.914 qpair failed and we were unable to recover it.
00:25:33.914 [2024-07-15 10:38:28.513894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.914 [2024-07-15 10:38:28.513920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.914 qpair failed and we were unable to recover it.
00:25:33.914 [2024-07-15 10:38:28.514044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.914 [2024-07-15 10:38:28.514070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.914 qpair failed and we were unable to recover it.
00:25:33.914 [2024-07-15 10:38:28.514185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.914 [2024-07-15 10:38:28.514210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.914 qpair failed and we were unable to recover it.
00:25:33.914 [2024-07-15 10:38:28.514382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.914 [2024-07-15 10:38:28.514406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.914 qpair failed and we were unable to recover it.
00:25:33.914 [2024-07-15 10:38:28.514550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.914 [2024-07-15 10:38:28.514590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.914 qpair failed and we were unable to recover it.
00:25:33.914 [2024-07-15 10:38:28.514777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.915 [2024-07-15 10:38:28.514804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.915 qpair failed and we were unable to recover it.
00:25:33.915 [2024-07-15 10:38:28.514963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.915 [2024-07-15 10:38:28.514989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.915 qpair failed and we were unable to recover it.
00:25:33.915 [2024-07-15 10:38:28.515155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.915 [2024-07-15 10:38:28.515186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.915 qpair failed and we were unable to recover it.
00:25:33.915 [2024-07-15 10:38:28.515346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.915 [2024-07-15 10:38:28.515375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.915 qpair failed and we were unable to recover it.
00:25:33.915 [2024-07-15 10:38:28.515519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.915 [2024-07-15 10:38:28.515544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.915 qpair failed and we were unable to recover it.
00:25:33.915 [2024-07-15 10:38:28.515660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.915 [2024-07-15 10:38:28.515684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.915 qpair failed and we were unable to recover it.
00:25:33.915 [2024-07-15 10:38:28.515801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.915 [2024-07-15 10:38:28.515825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.915 qpair failed and we were unable to recover it.
00:25:33.915 [2024-07-15 10:38:28.515978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.915 [2024-07-15 10:38:28.516003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.915 qpair failed and we were unable to recover it.
00:25:33.915 [2024-07-15 10:38:28.516197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.915 [2024-07-15 10:38:28.516224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.915 qpair failed and we were unable to recover it.
00:25:33.916 [2024-07-15 10:38:28.516387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.916 [2024-07-15 10:38:28.516415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.916 qpair failed and we were unable to recover it.
00:25:33.916 [2024-07-15 10:38:28.516554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.916 [2024-07-15 10:38:28.516579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.916 qpair failed and we were unable to recover it.
00:25:33.916 [2024-07-15 10:38:28.516702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.916 [2024-07-15 10:38:28.516727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.916 qpair failed and we were unable to recover it.
00:25:33.916 [2024-07-15 10:38:28.516837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.916 [2024-07-15 10:38:28.516861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.916 qpair failed and we were unable to recover it.
00:25:33.916 [2024-07-15 10:38:28.517027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.916 [2024-07-15 10:38:28.517052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.916 qpair failed and we were unable to recover it.
00:25:33.916 [2024-07-15 10:38:28.517196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.916 [2024-07-15 10:38:28.517237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.916 qpair failed and we were unable to recover it.
00:25:33.916 [2024-07-15 10:38:28.517393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.917 [2024-07-15 10:38:28.517421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.917 qpair failed and we were unable to recover it.
00:25:33.917 [2024-07-15 10:38:28.517620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.917 [2024-07-15 10:38:28.517646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.917 qpair failed and we were unable to recover it.
00:25:33.917 [2024-07-15 10:38:28.517815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.917 [2024-07-15 10:38:28.517840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.917 qpair failed and we were unable to recover it.
00:25:33.917 [2024-07-15 10:38:28.518035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.917 [2024-07-15 10:38:28.518064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.917 qpair failed and we were unable to recover it.
00:25:33.917 [2024-07-15 10:38:28.518253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.917 [2024-07-15 10:38:28.518278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.917 qpair failed and we were unable to recover it.
00:25:33.917 [2024-07-15 10:38:28.518394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.917 [2024-07-15 10:38:28.518418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.918 qpair failed and we were unable to recover it.
00:25:33.918 [2024-07-15 10:38:28.518566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.918 [2024-07-15 10:38:28.518593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.918 qpair failed and we were unable to recover it.
00:25:33.918 [2024-07-15 10:38:28.518785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.918 [2024-07-15 10:38:28.518810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.918 qpair failed and we were unable to recover it.
00:25:33.918 [2024-07-15 10:38:28.518981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.918 [2024-07-15 10:38:28.519007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.918 qpair failed and we were unable to recover it.
00:25:33.918 [2024-07-15 10:38:28.519144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.918 [2024-07-15 10:38:28.519185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.918 qpair failed and we were unable to recover it.
00:25:33.918 [2024-07-15 10:38:28.519353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.918 [2024-07-15 10:38:28.519377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.918 qpair failed and we were unable to recover it.
00:25:33.918 [2024-07-15 10:38:28.519546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.919 [2024-07-15 10:38:28.519574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.919 qpair failed and we were unable to recover it.
00:25:33.919 [2024-07-15 10:38:28.519707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.919 [2024-07-15 10:38:28.519735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.919 qpair failed and we were unable to recover it.
00:25:33.919 [2024-07-15 10:38:28.519916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.919 [2024-07-15 10:38:28.519941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.919 qpair failed and we were unable to recover it.
00:25:33.919 [2024-07-15 10:38:28.520092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.919 [2024-07-15 10:38:28.520122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.919 qpair failed and we were unable to recover it.
00:25:33.919 [2024-07-15 10:38:28.520238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.919 [2024-07-15 10:38:28.520263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.919 qpair failed and we were unable to recover it.
00:25:33.919 [2024-07-15 10:38:28.520416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.919 [2024-07-15 10:38:28.520441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.919 qpair failed and we were unable to recover it.
00:25:33.919 [2024-07-15 10:38:28.520627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.919 [2024-07-15 10:38:28.520655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.919 qpair failed and we were unable to recover it.
00:25:33.919 [2024-07-15 10:38:28.520816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.919 [2024-07-15 10:38:28.520844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.919 qpair failed and we were unable to recover it.
00:25:33.919 [2024-07-15 10:38:28.520997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.919 [2024-07-15 10:38:28.521022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.919 qpair failed and we were unable to recover it.
00:25:33.919 [2024-07-15 10:38:28.521150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.919 [2024-07-15 10:38:28.521192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.919 qpair failed and we were unable to recover it.
00:25:33.919 [2024-07-15 10:38:28.521321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.919 [2024-07-15 10:38:28.521348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.521513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.521538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.521706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.521733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.521895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.521924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.522121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.522146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.522363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.522388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.522507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.522533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.522680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.522706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.522885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.522911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.523059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.523084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.523209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.523234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.523404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.523429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.523599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.523627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.523805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.523830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.523975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.524000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.524118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.524145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.524334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.524359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.524519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.524547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.524699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.524727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.524861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.524899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.525030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.525071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.525235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.525263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.525452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.525476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.525599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.525623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.525738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.525764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.525893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.525919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.526072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.920 [2024-07-15 10:38:28.526097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.920 qpair failed and we were unable to recover it.
00:25:33.920 [2024-07-15 10:38:28.526245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.921 [2024-07-15 10:38:28.526271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.921 qpair failed and we were unable to recover it.
00:25:33.921 [2024-07-15 10:38:28.526415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.921 [2024-07-15 10:38:28.526440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.921 qpair failed and we were unable to recover it.
00:25:33.921 [2024-07-15 10:38:28.526559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.921 [2024-07-15 10:38:28.526583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.922 qpair failed and we were unable to recover it.
00:25:33.922 [2024-07-15 10:38:28.526754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.922 [2024-07-15 10:38:28.526779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:33.922 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.526908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.526934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.527068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.527094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.527222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.527247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.527398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.527427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.527579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.527604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.527753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.527781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.527945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.527971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.528098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.528123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.528301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.528326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.528499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.528524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.528656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.528683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.528847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.528896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.529032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.529059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.529236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.529262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.529418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.529443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.529588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.529613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.529733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.529759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.529913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.210 [2024-07-15 10:38:28.529940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.210 qpair failed and we were unable to recover it.
00:25:34.210 [2024-07-15 10:38:28.530090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.530115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.530260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.530285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.530409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.530434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.530582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.530607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.530752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.530795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.530960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.530989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.531161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.531186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.531357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.531382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.531505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.531530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.531687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.531714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.531865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.531897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.532018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.532044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.532167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.532196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.532324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.532349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.532492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.532517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.532642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.532667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.532793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.532818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.532963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.533002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.533150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.533176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.533305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.533330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.533459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.533484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.533629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.533654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.533799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.533824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.533988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.534014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.534145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.534171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.534291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.534333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.211 [2024-07-15 10:38:28.534507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.211 [2024-07-15 10:38:28.534532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.211 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.534677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.534702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.534870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.534906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.535079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.535104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.535277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.535301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.535497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.535525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.535734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.535759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.535906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.535932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.536046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.536072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.536211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.536239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.536433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.536458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.536637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.536664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.536817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.536844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.536998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.537027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.537197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.537225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.537386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.537411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.537583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.537607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.537752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.537781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.537940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.537981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.538127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.538152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.538303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.538328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.538445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.538469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.538589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.538614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.538759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.538802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.538970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.539010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.539175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.212 [2024-07-15 10:38:28.539202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.212 qpair failed and we were unable to recover it.
00:25:34.212 [2024-07-15 10:38:28.539388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.539416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.539559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.539587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.539763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.539788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.539940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.539966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.540119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.540145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.540360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.540386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.540547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.540575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.540711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.540738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.540904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.540929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.541080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.541104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.541287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.541311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.541485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.541510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.541712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.541739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.541909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.541950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.542101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.542130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.542270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.542298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.542517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.542568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.542724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.542750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.542927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.542953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.543101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.543126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.543340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.543365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.543531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.543558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.543725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.543753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.543921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.543946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.544068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.544092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.213 [2024-07-15 10:38:28.544272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.213 [2024-07-15 10:38:28.544297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.213 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.544425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.544449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.544565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.544590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.544802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.544830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.545015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.545040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.545210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.545238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.545379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.545408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.545585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.545612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.545786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.545814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.545988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.546014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.546162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.546187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.546353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.546381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.546604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.546662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.546801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.546828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.546955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.546981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.547132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.547173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.547330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.547355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.547545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.547573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.547733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.547761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.547931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.547957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.548073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.214 [2024-07-15 10:38:28.548100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.214 qpair failed and we were unable to recover it.
00:25:34.214 [2024-07-15 10:38:28.548238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.214 [2024-07-15 10:38:28.548266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.214 qpair failed and we were unable to recover it. 00:25:34.214 [2024-07-15 10:38:28.548431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.214 [2024-07-15 10:38:28.548456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.214 qpair failed and we were unable to recover it. 00:25:34.214 [2024-07-15 10:38:28.548609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.214 [2024-07-15 10:38:28.548634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.214 qpair failed and we were unable to recover it. 00:25:34.214 [2024-07-15 10:38:28.548779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.214 [2024-07-15 10:38:28.548821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.214 qpair failed and we were unable to recover it. 00:25:34.214 [2024-07-15 10:38:28.548971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.214 [2024-07-15 10:38:28.548997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.214 qpair failed and we were unable to recover it. 00:25:34.214 [2024-07-15 10:38:28.549142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.214 [2024-07-15 10:38:28.549167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.214 qpair failed and we were unable to recover it. 00:25:34.214 [2024-07-15 10:38:28.549341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.549368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.549538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.549564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.549712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.549760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.549936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.549963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 
00:25:34.215 [2024-07-15 10:38:28.550139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.550164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.550290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.550318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.550453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.550480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.550671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.550696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.550857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.550893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.551036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.551061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.551237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.551262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.551465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.551493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.551682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.551710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.551882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.551907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 
00:25:34.215 [2024-07-15 10:38:28.552024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.552049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.552218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.552246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.552424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.552449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.552571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.552597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.552770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.552799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.552963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.552989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.553116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.553141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.553263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.553287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.553437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.553462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.553627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.553656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 
00:25:34.215 [2024-07-15 10:38:28.553846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.553874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.554052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.554077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.215 qpair failed and we were unable to recover it. 00:25:34.215 [2024-07-15 10:38:28.554198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.215 [2024-07-15 10:38:28.554242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.554409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.554437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.554597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.554622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.554803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.554847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.555015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.555043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.555168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.555194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.555346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.555390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.555530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.555558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 
00:25:34.216 [2024-07-15 10:38:28.555757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.555782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.555927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.555957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.556097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.556126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.556294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.556319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.556480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.556509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.556670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.556697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.556831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.556856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.557013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.557041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.557210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.557244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.557438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.557464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 
00:25:34.216 [2024-07-15 10:38:28.557624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.557677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.557845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.557874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.558049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.558075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.216 [2024-07-15 10:38:28.558247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.216 [2024-07-15 10:38:28.558306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.216 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.558498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.558527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.558663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.558689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.558856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.558893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.559026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.559056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.559225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.559251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.559369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.559410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 
00:25:34.217 [2024-07-15 10:38:28.559570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.559598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.559775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.559804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.559990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.560018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.560213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.560241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.560413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.560439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.560589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.560616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.560812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.560840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.561015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.561041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.561210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.561241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.561406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.561435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 
00:25:34.217 [2024-07-15 10:38:28.561579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.561604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.561728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.561754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.561956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.561984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.562153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.562178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.562335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.562394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.562561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.562588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.562731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.562756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.562899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.562942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.563081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.563108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.563276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.563301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 
00:25:34.217 [2024-07-15 10:38:28.563456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.563500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.217 [2024-07-15 10:38:28.563676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.217 [2024-07-15 10:38:28.563703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.217 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.563868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.563899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.564015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.564040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.564180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.564208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.564383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.564408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.564577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.564605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.564794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.564822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.564990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.565019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.565168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.565214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 
00:25:34.218 [2024-07-15 10:38:28.565364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.565392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.565562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.565587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.565758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.565785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.565957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.565983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.566124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.566149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.566368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.566419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.566578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.566606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.566773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.566798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.566968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.566997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.567155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.567182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 
00:25:34.218 [2024-07-15 10:38:28.567379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.567404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.567632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.567685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.567848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.567881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.568050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.568075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.568238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.568268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.568457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.568484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.568628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.568652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.568804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.568829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.569000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.569025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.569141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.569166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 
00:25:34.218 [2024-07-15 10:38:28.569287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.569328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.569513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.569541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.569683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.569710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.569901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.569929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.570095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.570121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.570301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.570326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.570488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.570538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.570674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.570703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.570871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.570904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.571074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.571101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 
00:25:34.218 [2024-07-15 10:38:28.571288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.571316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.571459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.218 [2024-07-15 10:38:28.571484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.218 qpair failed and we were unable to recover it. 00:25:34.218 [2024-07-15 10:38:28.571599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.219 [2024-07-15 10:38:28.571625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.219 qpair failed and we were unable to recover it. 00:25:34.219 [2024-07-15 10:38:28.571803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.219 [2024-07-15 10:38:28.571831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.219 qpair failed and we were unable to recover it. 00:25:34.219 [2024-07-15 10:38:28.571981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.219 [2024-07-15 10:38:28.572006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.219 qpair failed and we were unable to recover it. 00:25:34.219 [2024-07-15 10:38:28.572181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.219 [2024-07-15 10:38:28.572209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.219 qpair failed and we were unable to recover it. 00:25:34.219 [2024-07-15 10:38:28.572384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.219 [2024-07-15 10:38:28.572409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.219 qpair failed and we were unable to recover it. 00:25:34.219 [2024-07-15 10:38:28.572532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.219 [2024-07-15 10:38:28.572556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.219 qpair failed and we were unable to recover it. 00:25:34.219 [2024-07-15 10:38:28.572677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.219 [2024-07-15 10:38:28.572706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.219 qpair failed and we were unable to recover it. 00:25:34.219 [2024-07-15 10:38:28.572819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.219 [2024-07-15 10:38:28.572844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.219 qpair failed and we were unable to recover it. 
00:25:34.219 [2024-07-15 10:38:28.572973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.219 [2024-07-15 10:38:28.573000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.219 qpair failed and we were unable to recover it.
00:25:34.219 [... the same three-line sequence -- "connect() failed, errno = 111" from posix.c:1038:posix_sock_create, "sock connection error" from nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it." -- repeats continuously from 10:38:28.572973 through 10:38:28.612053, always against addr=10.0.0.2, port=4420; the failing handle is tqpair=0x7fe654000b90 throughout, except for a short run against tqpair=0x7fe64c000b90 around 10:38:28.580919-10:38:28.582461 ...]
00:25:34.224 [2024-07-15 10:38:28.612025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.224 [2024-07-15 10:38:28.612053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.224 qpair failed and we were unable to recover it.
00:25:34.224 [2024-07-15 10:38:28.612212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.224 [2024-07-15 10:38:28.612237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.224 qpair failed and we were unable to recover it. 00:25:34.224 [2024-07-15 10:38:28.612381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.224 [2024-07-15 10:38:28.612425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.224 qpair failed and we were unable to recover it. 00:25:34.224 [2024-07-15 10:38:28.612561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.224 [2024-07-15 10:38:28.612588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.224 qpair failed and we were unable to recover it. 00:25:34.224 [2024-07-15 10:38:28.612734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.224 [2024-07-15 10:38:28.612761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.224 qpair failed and we were unable to recover it. 00:25:34.224 [2024-07-15 10:38:28.612909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.224 [2024-07-15 10:38:28.612935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.224 qpair failed and we were unable to recover it. 00:25:34.224 [2024-07-15 10:38:28.613087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.224 [2024-07-15 10:38:28.613112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.224 qpair failed and we were unable to recover it. 00:25:34.224 [2024-07-15 10:38:28.613262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.224 [2024-07-15 10:38:28.613287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.224 qpair failed and we were unable to recover it. 00:25:34.224 [2024-07-15 10:38:28.613429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.224 [2024-07-15 10:38:28.613457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.224 qpair failed and we were unable to recover it. 00:25:34.224 [2024-07-15 10:38:28.613586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.224 [2024-07-15 10:38:28.613614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.224 qpair failed and we were unable to recover it. 00:25:34.224 [2024-07-15 10:38:28.613813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.224 [2024-07-15 10:38:28.613838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.224 qpair failed and we were unable to recover it. 
00:25:34.224 [2024-07-15 10:38:28.614025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.224 [2024-07-15 10:38:28.614052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.224 qpair failed and we were unable to recover it. 00:25:34.224 [2024-07-15 10:38:28.614195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.224 [2024-07-15 10:38:28.614219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.224 qpair failed and we were unable to recover it. 00:25:34.224 [2024-07-15 10:38:28.614394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.224 [2024-07-15 10:38:28.614420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.224 qpair failed and we were unable to recover it. 00:25:34.224 [2024-07-15 10:38:28.614586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.224 [2024-07-15 10:38:28.614614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.224 qpair failed and we were unable to recover it. 00:25:34.224 [2024-07-15 10:38:28.614782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.224 [2024-07-15 10:38:28.614810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.224 qpair failed and we were unable to recover it. 00:25:34.224 [2024-07-15 10:38:28.615007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.224 [2024-07-15 10:38:28.615036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.224 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.615226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.615253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.615444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.615471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.615641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.615666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.615829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.615857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 
00:25:34.225 [2024-07-15 10:38:28.616011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.616036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.616180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.616205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.616320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.616363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.616521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.616548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.616692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.616718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.616892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.616918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.617057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.617082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.617227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.617251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.617400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.617425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.617622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.617649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 
00:25:34.225 [2024-07-15 10:38:28.617795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.617820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.617972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.618014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.618176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.618204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.618349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.618374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.618524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.618550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.618764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.618789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.618933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.618958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.619101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.619126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.619338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.619363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.619509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.619533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 
00:25:34.225 [2024-07-15 10:38:28.619724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.619752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.619938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.619967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.620108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.620133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.620249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.620275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.620455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.620483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.620659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.620685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.620851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.620890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.621052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.621081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.621234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.621258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.621448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.621475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 
00:25:34.225 [2024-07-15 10:38:28.621607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.225 [2024-07-15 10:38:28.621636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.225 qpair failed and we were unable to recover it. 00:25:34.225 [2024-07-15 10:38:28.621803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.621829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.621997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.622025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.622155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.622182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.622342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.622367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.622499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.622527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.622679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.622722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.622889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.622916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.623072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.623098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.623266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.623291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 
00:25:34.226 [2024-07-15 10:38:28.623404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.623429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.623546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.623572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.623717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.623746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.623910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.623936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.624086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.624111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.624262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.624287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.624404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.624429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.624616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.624644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.624772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.624800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.624946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.624972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 
00:25:34.226 [2024-07-15 10:38:28.625119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.625161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.625321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.625349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.625518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.226 [2024-07-15 10:38:28.625542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.226 qpair failed and we were unable to recover it. 00:25:34.226 [2024-07-15 10:38:28.625665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.625706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.625869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.625903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.626040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.626065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.626256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.626284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.626477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.626505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.626684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.626710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.626871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.626906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 
00:25:34.227 [2024-07-15 10:38:28.627066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.627094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.627264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.627289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.627415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.627459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.627625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.627653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.627797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.627823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.627944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.627969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.628089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.628114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.628255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.628279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.628397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.628422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.628595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.628620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 
00:25:34.227 [2024-07-15 10:38:28.628763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.628792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.628943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.628968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.629114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.629139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.629303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.629329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.629515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.629544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.629709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.629741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.629885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.629911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.630062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.630104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.630237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.227 [2024-07-15 10:38:28.630265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.227 qpair failed and we were unable to recover it. 00:25:34.227 [2024-07-15 10:38:28.630404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.630430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 
00:25:34.228 [2024-07-15 10:38:28.630548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.630573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.630717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.630745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.630912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.630938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.631099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.631126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.631272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.631297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.631445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.631470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.631596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.631640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.631795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.631823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.632030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.632055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.632219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.632247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 
00:25:34.228 [2024-07-15 10:38:28.632405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.632432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.632604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.632631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.632746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.632788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.632980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.633008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.633154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.633180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.633333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.633377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.633541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.633568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.633767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.633792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.633944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.633970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.634167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.634195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 
00:25:34.228 [2024-07-15 10:38:28.634365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.634390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.634542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.634570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.634712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.634740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.634928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.634953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.635089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.635116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.635293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.228 [2024-07-15 10:38:28.635318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.228 qpair failed and we were unable to recover it. 00:25:34.228 [2024-07-15 10:38:28.635467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.229 [2024-07-15 10:38:28.635491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.229 qpair failed and we were unable to recover it. 00:25:34.229 [2024-07-15 10:38:28.635652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.229 [2024-07-15 10:38:28.635679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.229 qpair failed and we were unable to recover it. 00:25:34.229 [2024-07-15 10:38:28.635813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.229 [2024-07-15 10:38:28.635841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.229 qpair failed and we were unable to recover it. 00:25:34.229 [2024-07-15 10:38:28.635985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.229 [2024-07-15 10:38:28.636011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.229 qpair failed and we were unable to recover it. 
00:25:34.229 [2024-07-15 10:38:28.636163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.229 [2024-07-15 10:38:28.636188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.229 qpair failed and we were unable to recover it.
00:25:34.229 [... the same three-line retry record -- posix_sock_create connect() failed with errno = 111, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7fe654000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt between 10:38:28.636 and 10:38:28.675 ...]
00:25:34.236 [2024-07-15 10:38:28.675311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.236 [2024-07-15 10:38:28.675337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.236 qpair failed and we were unable to recover it.
00:25:34.236 [2024-07-15 10:38:28.675472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.675498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.675682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.675711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.675886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.675911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.676061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.676087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.676271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.676297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.676417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.676443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.676566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.676614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.676780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.676808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.677021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.677053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.677203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.677231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 
00:25:34.236 [2024-07-15 10:38:28.677395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.677424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.677591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.677617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.677779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.677808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.677960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.677989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.678190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.678216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.678358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.678387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.678574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.678602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.678751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.678787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.678931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.236 [2024-07-15 10:38:28.678965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.236 qpair failed and we were unable to recover it. 00:25:34.236 [2024-07-15 10:38:28.679123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.679148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 
00:25:34.237 [2024-07-15 10:38:28.679334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.679359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.679493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.679519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.679641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.679670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.679837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.679862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.680057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.680092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.680276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.680305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.680451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.680477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.680604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.680629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.680776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.680802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.681007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.681044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 
00:25:34.237 [2024-07-15 10:38:28.681185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.681212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.681353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.681380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.681557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.681584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.681708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.681734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.681890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.681916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.682059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.682091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.682247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.682275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.682417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.682446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.682620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.682645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.682776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.682802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 
00:25:34.237 [2024-07-15 10:38:28.682929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.682962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.683123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.683147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.683315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.683354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.683520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.683548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.683718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.683744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.683893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.683928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.684108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.684145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.684336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.684361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.684487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.684512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.684643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.684678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 
00:25:34.237 [2024-07-15 10:38:28.684909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.684936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.685086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.685115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.237 qpair failed and we were unable to recover it. 00:25:34.237 [2024-07-15 10:38:28.685251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.237 [2024-07-15 10:38:28.685288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.685499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.685525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.685695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.685724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.685848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.685882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.686061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.686097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.686294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.686324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.686496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.686525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.686694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.686724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 
00:25:34.238 [2024-07-15 10:38:28.686860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.686922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.687068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.687092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.687228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.687258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.687454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.687483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.687676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.687705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.687846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.687871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.688013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.688062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.688243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.688279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.688450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.688476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.688651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.688686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 
00:25:34.238 [2024-07-15 10:38:28.688871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.688909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.689076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.689101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.689222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.689263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.689445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.689487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.689613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.689637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.689773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.689798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.689988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.690015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.690203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.690229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.690354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.690388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.690540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.690566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 
00:25:34.238 [2024-07-15 10:38:28.690737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.690774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.690960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.690989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.691147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.691186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.691400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.691426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.691555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.691580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.691768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.691792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.691925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.691951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.692083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.692110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.692258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.692284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.692437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.692471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 
00:25:34.238 [2024-07-15 10:38:28.692602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.692627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.692777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.692803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.692945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.238 [2024-07-15 10:38:28.692971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.238 qpair failed and we were unable to recover it. 00:25:34.238 [2024-07-15 10:38:28.693097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.693145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.693335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.693363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.693543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.693569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.693726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.693762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.693959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.693986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.694139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.694164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.694293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.694328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 
00:25:34.239 [2024-07-15 10:38:28.694486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.694512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.694669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.694694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.694843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.694886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.695070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.695096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.695213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.695238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.695403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.695428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.695583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.695618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.695777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.695803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.695959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.695986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.696120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.696147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 
00:25:34.239 [2024-07-15 10:38:28.696279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.696312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.696465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.696490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.696618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.696647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.696772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.696799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.696956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.696981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.697128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.697153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.697319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.697347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.697472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.697498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.697616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.697641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.697763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.697788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 
00:25:34.239 [2024-07-15 10:38:28.697939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.697969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.698100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.698126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.698271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.698296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.698409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.698437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.698577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.698602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.698724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.698750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.698874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.698906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.699057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.699082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.699199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.699225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 00:25:34.239 [2024-07-15 10:38:28.699352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.239 [2024-07-15 10:38:28.699378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.239 qpair failed and we were unable to recover it. 
00:25:34.239 [2024-07-15 10:38:28.699558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.239 [2024-07-15 10:38:28.699588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.239 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed triple repeats for tqpair=0x7fe654000b90, timestamps 10:38:28.699707 through 10:38:28.709942 (about 60 further attempts), differing only in timestamp ...]
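Errno 111 is ECONNREFUSED on Linux: the TCP connection to 10.0.0.2 on port 4420 (the IANA-registered NVMe-oF port) is refused because nothing is accepting on the target yet, so the connect() inside posix_sock_create() fails and nvme_tcp_qpair_connect_sock() abandons the qpair. A minimal standalone sketch (illustrative only, not SPDK code; the file name is made up) that surfaces the same errno when no listener is on the port:

/* probe_connect.c (hypothetical name): reproduce errno 111 against a
 * closed port. If 10.0.0.2 is unreachable from your host you may see
 * ETIMEDOUT or EHOSTUNREACH instead of ECONNREFUSED. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);   /* target taken from the log */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* With a refused connection this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}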
00:25:34.241 [2024-07-15 10:38:28.710068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca20e0 is same with the state(5) to be set
00:25:34.241 [2024-07-15 10:38:28.710247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.241 [2024-07-15 10:38:28.710286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.241 qpair failed and we were unable to recover it.
00:25:34.241 [2024-07-15 10:38:28.711448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.241 [2024-07-15 10:38:28.711473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.241 qpair failed and we were unable to recover it. 00:25:34.241 [2024-07-15 10:38:28.711593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.241 [2024-07-15 10:38:28.711617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.241 qpair failed and we were unable to recover it. 00:25:34.241 [2024-07-15 10:38:28.711781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.241 [2024-07-15 10:38:28.711809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.241 qpair failed and we were unable to recover it. 00:25:34.241 [2024-07-15 10:38:28.711957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.241 [2024-07-15 10:38:28.711983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.241 qpair failed and we were unable to recover it. 00:25:34.241 [2024-07-15 10:38:28.712121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.241 [2024-07-15 10:38:28.712146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.241 qpair failed and we were unable to recover it. 00:25:34.241 [2024-07-15 10:38:28.712320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.241 [2024-07-15 10:38:28.712366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.241 qpair failed and we were unable to recover it. 00:25:34.241 [2024-07-15 10:38:28.712515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.241 [2024-07-15 10:38:28.712540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.241 qpair failed and we were unable to recover it. 00:25:34.241 [2024-07-15 10:38:28.712702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.241 [2024-07-15 10:38:28.712730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.241 qpair failed and we were unable to recover it. 00:25:34.241 [2024-07-15 10:38:28.712890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.241 [2024-07-15 10:38:28.712919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.241 qpair failed and we were unable to recover it. 00:25:34.241 [2024-07-15 10:38:28.713086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.241 [2024-07-15 10:38:28.713111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.241 qpair failed and we were unable to recover it. 
00:25:34.241 [2024-07-15 10:38:28.713271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.241 [2024-07-15 10:38:28.713300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.241 qpair failed and we were unable to recover it. 00:25:34.241 [2024-07-15 10:38:28.713477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.241 [2024-07-15 10:38:28.713502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.241 qpair failed and we were unable to recover it. 00:25:34.241 [2024-07-15 10:38:28.713658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.241 [2024-07-15 10:38:28.713683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.241 qpair failed and we were unable to recover it. 00:25:34.241 [2024-07-15 10:38:28.713806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.241 [2024-07-15 10:38:28.713831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.241 qpair failed and we were unable to recover it. 00:25:34.241 [2024-07-15 10:38:28.713985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.241 [2024-07-15 10:38:28.714012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.241 qpair failed and we were unable to recover it. 00:25:34.241 [2024-07-15 10:38:28.714173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.714199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.714335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.714361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.714488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.714513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.714633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.714658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.714810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.714836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 
00:25:34.242 [2024-07-15 10:38:28.714984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.715009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.715123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.715148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.715268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.715293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.715415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.715445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.715596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.715621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.715746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.715771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.715932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.715958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.716080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.716106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.716259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.716284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.716393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.716418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 
00:25:34.242 [2024-07-15 10:38:28.716556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.716581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.716732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.716758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.716891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.716918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.717051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.717077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.717229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.717256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.717436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.717461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.717598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.717623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.717756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.717781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.717954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.717980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.718137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.718162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 
00:25:34.242 [2024-07-15 10:38:28.718335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.718363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.718482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.718511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.718662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.718687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.718857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.718916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.719059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.719084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.719240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.719265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.719432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.719459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.719615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.719660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.719838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.719863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 00:25:34.242 [2024-07-15 10:38:28.720000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.242 [2024-07-15 10:38:28.720025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.242 qpair failed and we were unable to recover it. 
00:25:34.242 [2024-07-15 10:38:28.720169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.720204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.720403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.720428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.720578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.720603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.720729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.720754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.720888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.720914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.721043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.721068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.721189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.721214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.721343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.721368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.721497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.721522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.721642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.721667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 
00:25:34.243 [2024-07-15 10:38:28.721778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.721803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.721928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.721954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.722072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.722098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.722255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.722280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.722458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.722490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.722620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.722651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.722822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.722847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.722969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.722994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.723114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.723140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.723254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.723279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 
00:25:34.243 [2024-07-15 10:38:28.723428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.723453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.723612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.723637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.723759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.723784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.723906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.723932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.724079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.724104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.724219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.724243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.724366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.724391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.724506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.724531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.724682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.724706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.724848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.724873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 
00:25:34.243 [2024-07-15 10:38:28.725022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.725047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.725159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.725183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.725306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.725332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.725484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.725509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.725635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.725660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.725814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.725839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.725969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.725995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.726116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.726140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.726279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.726304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.726452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.726477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 
00:25:34.243 [2024-07-15 10:38:28.726593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.726619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.243 qpair failed and we were unable to recover it. 00:25:34.243 [2024-07-15 10:38:28.726745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.243 [2024-07-15 10:38:28.726774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 00:25:34.244 [2024-07-15 10:38:28.726919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.726945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 00:25:34.244 [2024-07-15 10:38:28.727062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.727087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 00:25:34.244 [2024-07-15 10:38:28.727202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.727227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 00:25:34.244 [2024-07-15 10:38:28.727345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.727370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 00:25:34.244 [2024-07-15 10:38:28.727514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.727539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 00:25:34.244 [2024-07-15 10:38:28.727663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.727688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 00:25:34.244 [2024-07-15 10:38:28.727806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.727833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 00:25:34.244 [2024-07-15 10:38:28.727953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.727979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 
00:25:34.244 [2024-07-15 10:38:28.728127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.728153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 00:25:34.244 [2024-07-15 10:38:28.728273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.728298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 00:25:34.244 [2024-07-15 10:38:28.728465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.728490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 00:25:34.244 [2024-07-15 10:38:28.728663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.728688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 00:25:34.244 [2024-07-15 10:38:28.728802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.728827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 00:25:34.244 [2024-07-15 10:38:28.728961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.728987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 00:25:34.244 [2024-07-15 10:38:28.729104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.729129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 00:25:34.244 [2024-07-15 10:38:28.729249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.729274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 00:25:34.244 [2024-07-15 10:38:28.729401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.729427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 00:25:34.244 [2024-07-15 10:38:28.729574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.244 [2024-07-15 10:38:28.729598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.244 qpair failed and we were unable to recover it. 
00:25:34.245 [2024-07-15 10:38:28.740502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.246 [2024-07-15 10:38:28.740545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.246 qpair failed and we were unable to recover it.
00:25:34.246 [2024-07-15 10:38:28.741706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.246 [2024-07-15 10:38:28.741731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.246 qpair failed and we were unable to recover it.
00:25:34.249 [2024-07-15 10:38:28.763475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.763500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.763639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.763667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.763865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.763895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.764053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.764078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.764228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.764270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.764407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.764435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.764576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.764601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.764723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.764748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.764928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.764957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.765103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.765128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 
00:25:34.249 [2024-07-15 10:38:28.765279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.765321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.765490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.765517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.765711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.765736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.765904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.765938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.766102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.766130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.766332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.766357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.766485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.766510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.766633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.766658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.766777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.766802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.766926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.766952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 
00:25:34.249 [2024-07-15 10:38:28.767126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.767150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.767314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.767339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.767490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.767514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.767698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.767726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.767896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.767922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.768042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.768083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.768256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.768284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.768461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.768486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.768601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.768642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.768763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.768790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 
00:25:34.249 [2024-07-15 10:38:28.768933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.768958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.249 qpair failed and we were unable to recover it. 00:25:34.249 [2024-07-15 10:38:28.769111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.249 [2024-07-15 10:38:28.769135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.769360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.769385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.769535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.769560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.769724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.769752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.769937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.769965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.770164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.770192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.770324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.770352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.770518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.770546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.770678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.770703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 
00:25:34.250 [2024-07-15 10:38:28.770848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.770872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.771062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.771090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.771279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.771303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.771434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.771462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.771615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.771643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.771807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.771831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.771962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.772007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.772137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.772165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.772304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.772329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.772454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.772478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 
00:25:34.250 [2024-07-15 10:38:28.772682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.772710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.772872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.772904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.773062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.773090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.773225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.773253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.773445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.773469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.773637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.773665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.773795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.773823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.773960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.773985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.774113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.774138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.774346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.774374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 
00:25:34.250 [2024-07-15 10:38:28.774523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.774548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.774663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.774688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.774829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.774857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.775027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.775056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.775211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.775239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.250 [2024-07-15 10:38:28.775369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.250 [2024-07-15 10:38:28.775396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.250 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.775565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.775590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.775759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.775788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.775954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.775982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.776126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.776151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 
00:25:34.251 [2024-07-15 10:38:28.776275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.776300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.776472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.776499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.776660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.776685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.776844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.776871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.777003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.777031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.777225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.777250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.777442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.777470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.777661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.777689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.777820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.777845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.777965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.777990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 
00:25:34.251 [2024-07-15 10:38:28.778162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.778190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.778360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.778385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.778577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.778605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.778801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.778828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.778999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.779024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.779169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.779210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.779369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.779397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.779587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.779612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.779781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.779808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.779951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.779980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 
00:25:34.251 [2024-07-15 10:38:28.780139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.780164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.780289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.780314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.780429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.780454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.780579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.780604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.780768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.780795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.781006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.781032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.781146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.781171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.781326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.781369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.781531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.781559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.781730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.781754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 
00:25:34.251 [2024-07-15 10:38:28.781942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.781971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.782110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.782138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.782289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.782314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.782463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.782507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.782677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.782705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.782855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.782884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.251 [2024-07-15 10:38:28.783029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.251 [2024-07-15 10:38:28.783054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.251 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.783200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.783242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.783407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.783432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.783601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.783629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 
00:25:34.252 [2024-07-15 10:38:28.783796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.783825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.783982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.784008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.784155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.784196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.784365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.784392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.784560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.784585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.784779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.784807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.784950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.784989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.785153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.785178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.785314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.785357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.785491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.785518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 
00:25:34.252 [2024-07-15 10:38:28.785654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.785679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.785796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.785821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.785999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.786025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.786169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.786194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.786386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.786413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.786585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.786613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.786747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.786772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.786926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.786951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.787099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.787124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.787284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.787309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 
00:25:34.252 [2024-07-15 10:38:28.787448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.787489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.787652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.787680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.787885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.787911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.788040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.788068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.788231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.788260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.788422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.788447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.788597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.788622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.788775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.788800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.788971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.788996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 00:25:34.252 [2024-07-15 10:38:28.789108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.252 [2024-07-15 10:38:28.789133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.252 qpair failed and we were unable to recover it. 
00:25:34.252 [2024-07-15 10:38:28.789283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.252 [2024-07-15 10:38:28.789308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.252 qpair failed and we were unable to recover it.
00:25:34.259 [... the same three-line failure repeats continuously through [2024-07-15 10:38:28.827791]: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0xc94200 (addr=10.0.0.2, port=4420), and each time the qpair fails and cannot be recovered ...]
00:25:34.259 [2024-07-15 10:38:28.827951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.827979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.828121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.828146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.828322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.828361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.828491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.828519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.828654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.828696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.828833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.828861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.829057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.829083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.829232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.829257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.829427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.829455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.829612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.829640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 
00:25:34.259 [2024-07-15 10:38:28.829771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.829796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.829968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.830010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.830146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.830173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.830352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.830378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.830547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.830574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.830736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.830764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.830908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.830943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.831101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.831125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.831329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.831356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.831519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.831544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 
00:25:34.259 [2024-07-15 10:38:28.831696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.831721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.831862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.831892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.259 [2024-07-15 10:38:28.832018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-07-15 10:38:28.832042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.259 qpair failed and we were unable to recover it. 00:25:34.260 [2024-07-15 10:38:28.832217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-07-15 10:38:28.832244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.260 qpair failed and we were unable to recover it. 00:25:34.260 [2024-07-15 10:38:28.832370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-07-15 10:38:28.832397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.260 qpair failed and we were unable to recover it. 00:25:34.260 [2024-07-15 10:38:28.832524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-07-15 10:38:28.832548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.260 qpair failed and we were unable to recover it. 00:25:34.260 [2024-07-15 10:38:28.832664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-07-15 10:38:28.832699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.260 qpair failed and we were unable to recover it. 00:25:34.260 [2024-07-15 10:38:28.832886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-07-15 10:38:28.832915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.260 qpair failed and we were unable to recover it. 00:25:34.260 [2024-07-15 10:38:28.833071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-07-15 10:38:28.833096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.260 qpair failed and we were unable to recover it. 00:25:34.260 [2024-07-15 10:38:28.833267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-07-15 10:38:28.833292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.260 qpair failed and we were unable to recover it. 
00:25:34.260 [2024-07-15 10:38:28.833441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-07-15 10:38:28.833469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.260 qpair failed and we were unable to recover it. 00:25:34.260 [2024-07-15 10:38:28.833664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-07-15 10:38:28.833689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.260 qpair failed and we were unable to recover it. 00:25:34.260 [2024-07-15 10:38:28.833854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-07-15 10:38:28.833888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.260 qpair failed and we were unable to recover it. 00:25:34.260 [2024-07-15 10:38:28.834071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-07-15 10:38:28.834099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.260 qpair failed and we were unable to recover it. 00:25:34.260 [2024-07-15 10:38:28.834256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-07-15 10:38:28.834281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.260 qpair failed and we were unable to recover it. 00:25:34.260 [2024-07-15 10:38:28.834403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-07-15 10:38:28.834445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.260 qpair failed and we were unable to recover it. 00:25:34.260 [2024-07-15 10:38:28.834591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-07-15 10:38:28.834618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.260 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.834763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.834788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.834924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.834950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.835129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.835154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 
00:25:34.551 [2024-07-15 10:38:28.835358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.835384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.835534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.835559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.835675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.835701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.835873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.835910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.836076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.836104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.836245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.836273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.836438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.836462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.836582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.836606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.836806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.836834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.837009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.837035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 
00:25:34.551 [2024-07-15 10:38:28.837198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.837226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.837357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.837385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.837547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.837571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.837716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.837758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.837947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.837972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.838093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.838118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.838262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.838304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.838475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.838499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.838649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.838674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.838865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.838901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 
00:25:34.551 [2024-07-15 10:38:28.839021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.839049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.839238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.839262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.839392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.551 [2024-07-15 10:38:28.839417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.551 qpair failed and we were unable to recover it. 00:25:34.551 [2024-07-15 10:38:28.839566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.839591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.839748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.839772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.839920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.839961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.840096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.840124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.840262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.840291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.840451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.840493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.840661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.840689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 
00:25:34.552 [2024-07-15 10:38:28.840886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.840912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.841052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.841080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.841241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.841270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.841435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.841460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.841627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.841654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.841842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.841870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.842072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.842097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.842261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.842288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.842440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.842468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.842639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.842663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 
00:25:34.552 [2024-07-15 10:38:28.842833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.842881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.843049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.843077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.843222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.843246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.843412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.843440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.843601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.843628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.843827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.843852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.844008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.844036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.844206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.844234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.844390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.844414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.844575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.844602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 
00:25:34.552 [2024-07-15 10:38:28.844727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.844755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.844913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.844938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.845082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.845107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.845304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.845329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.845452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.845480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.845670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.845697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.845854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.845887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.846053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.846079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.846274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.846302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.846465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.846492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 
00:25:34.552 [2024-07-15 10:38:28.846655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.846680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.846801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.846826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.847043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.847068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.847215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.552 [2024-07-15 10:38:28.847240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.552 qpair failed and we were unable to recover it. 00:25:34.552 [2024-07-15 10:38:28.847412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.847440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.847625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.847652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.847794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.847819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.847975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.848019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.848185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.848213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.848412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.848437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 
00:25:34.553 [2024-07-15 10:38:28.848572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.848601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.848733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.848760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.848896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.848922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.849048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.849073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.849247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.849276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.849452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.849477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.849599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.849640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.849834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.849859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.850014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.850040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.850161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.850186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 
00:25:34.553 [2024-07-15 10:38:28.850322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.850350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.850520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.850545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.850663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.850705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.850861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.850895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.851063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.851088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.851213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.851238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.851405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.851432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.851577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.851602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.851747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.851772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 00:25:34.553 [2024-07-15 10:38:28.851940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.553 [2024-07-15 10:38:28.851968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.553 qpair failed and we were unable to recover it. 
00:25:34.553 [2024-07-15 10:38:28.852141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.553 [2024-07-15 10:38:28.852166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.553 qpair failed and we were unable to recover it.
00:25:34.553-00:25:34.558 [the same three-line sequence -- posix.c:1038:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." -- repeats roughly 200 more times between 10:38:28.852 and 10:38:28.890, always for the same tqpair=0xc94200, addr=10.0.0.2, port=4420]
00:25:34.558 [2024-07-15 10:38:28.890092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.558 [2024-07-15 10:38:28.890117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.558 qpair failed and we were unable to recover it. 00:25:34.558 [2024-07-15 10:38:28.890249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.558 [2024-07-15 10:38:28.890292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.558 qpair failed and we were unable to recover it. 00:25:34.558 [2024-07-15 10:38:28.890426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.558 [2024-07-15 10:38:28.890454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.558 qpair failed and we were unable to recover it. 00:25:34.558 [2024-07-15 10:38:28.890645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.558 [2024-07-15 10:38:28.890670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.558 qpair failed and we were unable to recover it. 00:25:34.558 [2024-07-15 10:38:28.890804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.890832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.891018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.891044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.891161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.891186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.891309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.891334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.891466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.891491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.891605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.891629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 
00:25:34.559 [2024-07-15 10:38:28.891799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.891827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.891994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.892022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.892164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.892190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.892366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.892394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.892516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.892544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.892697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.892721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.892872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.892934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.893106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.893131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.893317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.893342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.893458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.893500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 
00:25:34.559 [2024-07-15 10:38:28.893638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.893665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.893813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.893838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.893969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.894010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.894178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.894206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.894401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.894426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.894592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.894620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.894810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.894838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.894993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.895018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.895162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.895204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.895365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.895392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 
00:25:34.559 [2024-07-15 10:38:28.895563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.895588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.895711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.895736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.895917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.895945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.896088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.896114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.896237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.896262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.896375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.896399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.896532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.896557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.896719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.896747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.896882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.896910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 00:25:34.559 [2024-07-15 10:38:28.897085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.559 [2024-07-15 10:38:28.897110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.559 qpair failed and we were unable to recover it. 
00:25:34.560 [2024-07-15 10:38:28.897239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.897268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.897394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.897419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.897539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.897564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.897678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.897703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.897881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.897910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.898059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.898084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.898209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.898234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.898383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.898410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.898557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.898582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.898732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.898756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 
00:25:34.560 [2024-07-15 10:38:28.898941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.898967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.899091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.899116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.899261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.899302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.899428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.899456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.899605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.899630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.899771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.899812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.899953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.899982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.900154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.900179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.900314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.900342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.900501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.900529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 
00:25:34.560 [2024-07-15 10:38:28.900678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.900702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.900851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.900895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.901061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.901089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.901233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.901258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.901406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.901431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.901613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.901638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.901768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.901792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.901940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.901970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.902098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.902122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.902265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.902290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 
00:25:34.560 [2024-07-15 10:38:28.902452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.902480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.902607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.902634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.902780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.902805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.902960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.902986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.903140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.903168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.903360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.903386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.903569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.903596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.903746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.903774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.903925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.903951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 00:25:34.560 [2024-07-15 10:38:28.904071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.560 [2024-07-15 10:38:28.904096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.560 qpair failed and we were unable to recover it. 
00:25:34.561 [2024-07-15 10:38:28.904239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.904267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.904464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.904489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.904659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.904687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.904852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.904887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.905026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.905052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.905169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.905210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.905373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.905400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.905551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.905576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.905718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.905760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.905938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.905964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 
00:25:34.561 [2024-07-15 10:38:28.906089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.906113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.906244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.906269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.906445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.906472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.906639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.906664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.906835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.906863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.907024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.907052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.907194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.907218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.907412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.907439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.907568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.907595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.907741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.907766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 
00:25:34.561 [2024-07-15 10:38:28.907896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.907922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.908039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.908064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.908216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.908241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.908366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.908391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.908518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.908543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.908660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.908685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.908835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.908860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.909031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.909059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.909252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.909281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.909497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.909522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 
00:25:34.561 [2024-07-15 10:38:28.909665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.909690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.909838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.909863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.909982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.910007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.910160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.910185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.910332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.910357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.910495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.910537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.910704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.910729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.910851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.910881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.911033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.911057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.911183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.911207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 
00:25:34.561 [2024-07-15 10:38:28.911331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.561 [2024-07-15 10:38:28.911356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.561 qpair failed and we were unable to recover it. 00:25:34.561 [2024-07-15 10:38:28.911475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.911500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 00:25:34.562 [2024-07-15 10:38:28.911636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.911660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 00:25:34.562 [2024-07-15 10:38:28.911832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.911859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 00:25:34.562 [2024-07-15 10:38:28.912005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.912030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 00:25:34.562 [2024-07-15 10:38:28.912145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.912169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 00:25:34.562 [2024-07-15 10:38:28.912287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.912312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 00:25:34.562 [2024-07-15 10:38:28.912458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.912501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 00:25:34.562 [2024-07-15 10:38:28.912632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.912659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 00:25:34.562 [2024-07-15 10:38:28.912864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.912897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 
00:25:34.562 [2024-07-15 10:38:28.913033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.913060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 00:25:34.562 [2024-07-15 10:38:28.913222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.913250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 00:25:34.562 [2024-07-15 10:38:28.913400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.913425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 00:25:34.562 [2024-07-15 10:38:28.913567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.913591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 00:25:34.562 [2024-07-15 10:38:28.913759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.913799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 00:25:34.562 [2024-07-15 10:38:28.913946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.913972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 00:25:34.562 [2024-07-15 10:38:28.914149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.914177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 00:25:34.562 [2024-07-15 10:38:28.914312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.914340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 00:25:34.562 [2024-07-15 10:38:28.914488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.914512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 00:25:34.562 [2024-07-15 10:38:28.914629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.562 [2024-07-15 10:38:28.914654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.562 qpair failed and we were unable to recover it. 
00:25:34.567 [2024-07-15 10:38:28.951031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.951059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.951251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.951279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.951427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.951452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.951577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.951601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.951754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.951781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.951959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.951984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.952125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.952150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.952363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.952388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.952555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.952580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.952776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.952804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 
00:25:34.567 [2024-07-15 10:38:28.952969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.952998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.953149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.953175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.953300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.953325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.953527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.953561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.953731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.953755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.953896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.953940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.954097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.954125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.954298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.954323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.954468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.954493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.954627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.954655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 
00:25:34.567 [2024-07-15 10:38:28.954828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.954853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.955002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.955045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.955207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.567 [2024-07-15 10:38:28.955234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.567 qpair failed and we were unable to recover it. 00:25:34.567 [2024-07-15 10:38:28.955372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.955396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.955548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.955573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.955741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.955769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.955932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.955958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.956113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.956138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.956276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.956304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.956467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.956492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 
00:25:34.568 [2024-07-15 10:38:28.956655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.956684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.956816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.956844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.957036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.957062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.957226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.957253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.957417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.957444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.957637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.957662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.957785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.957810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.957938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.957964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.958090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.958117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.958280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.958308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 
00:25:34.568 [2024-07-15 10:38:28.958441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.958473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.958640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.958665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.958787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.958829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.959028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.959054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.959205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.959230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.959371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.959396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.959562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.959590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.959749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.959777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.959948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.959974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.960117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.960142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 
00:25:34.568 [2024-07-15 10:38:28.960297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.960321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.960465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.960506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.960642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.960669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.960847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.960871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.961051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.961079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.961219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.961246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.961382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.961406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.961520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.568 [2024-07-15 10:38:28.961544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.568 qpair failed and we were unable to recover it. 00:25:34.568 [2024-07-15 10:38:28.961711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.961738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.961942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.961968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 
00:25:34.569 [2024-07-15 10:38:28.962125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.962153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.962308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.962335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.962473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.962498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.962620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.962645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.962768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.962793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.962938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.962964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.963129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.963156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.963283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.963312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.963488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.963513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.963672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.963699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 
00:25:34.569 [2024-07-15 10:38:28.963854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.963887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.964075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.964100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.964259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.964287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.964413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.964441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.964579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.964604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.964736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.964761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.964907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.964932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.965083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.965108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.965253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.965277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.965393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.965418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 
00:25:34.569 [2024-07-15 10:38:28.965587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.965611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.965775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.965806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.965966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.965994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.966164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.966189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.966309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.966333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.966478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.966503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.966675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.966700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.966864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.966898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.967067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.967092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.967273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.967318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 
00:25:34.569 [2024-07-15 10:38:28.967481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.967506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.967621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.967646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.967814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.967839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.967969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.967994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.968140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.968181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.968378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.968403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.968567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.968595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.968735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.968762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.968895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.968920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 00:25:34.569 [2024-07-15 10:38:28.969109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.569 [2024-07-15 10:38:28.969137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.569 qpair failed and we were unable to recover it. 
00:25:34.570 [2024-07-15 10:38:28.969303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.969331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.969523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.969548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.969744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.969772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.969929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.969957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.970126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.970151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.970274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.970300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.970445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.970473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.970631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.970656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.970851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.970889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.971025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.971053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 
00:25:34.570 [2024-07-15 10:38:28.971217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.971242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.971405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.971433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.971629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.971654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.971797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.971822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.971977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.972003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.972152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.972177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.972391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.972415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.972579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.972607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.972740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.972767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.972904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.972930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 
00:25:34.570 [2024-07-15 10:38:28.973066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.973107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.973240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.973268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.973466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.973490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.973648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.973675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.973833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.973861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.974050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.974075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.974222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.974249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.974408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.974435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.974585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.974610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 00:25:34.570 [2024-07-15 10:38:28.974782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.570 [2024-07-15 10:38:28.974807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.570 qpair failed and we were unable to recover it. 
00:25:34.570 [2024-07-15 10:38:28.974989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.570 [2024-07-15 10:38:28.975015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.570 qpair failed and we were unable to recover it.
00:25:34.575 [previous three messages repeated ~210 times in total between 10:38:28.974989 and 10:38:29.014258; every attempt failed with errno = 111 for tqpair=0xc94200, addr=10.0.0.2, port=4420]
00:25:34.575 [2024-07-15 10:38:29.014389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.575 [2024-07-15 10:38:29.014414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.575 qpair failed and we were unable to recover it. 00:25:34.575 [2024-07-15 10:38:29.014559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.014584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.014758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.014783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.014967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.014993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.015138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.015163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.015319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.015346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.015491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.015516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.015662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.015688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.015856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.015889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.016057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.016085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 
00:25:34.576 [2024-07-15 10:38:29.016282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.016307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.016466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.016494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.016655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.016682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.016847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.016882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.017034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.017059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.017197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.017225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.017394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.017418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.017548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.017572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.017720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.017745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.017951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.017977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 
00:25:34.576 [2024-07-15 10:38:29.018136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.018163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.018321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.018349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.018498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.018523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.018651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.018676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.018889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.018917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.019091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.019116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.019235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.019276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.019409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.019437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.019569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.019593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.019738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.019762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 
00:25:34.576 [2024-07-15 10:38:29.019920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.019948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.020119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.020144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.020306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.020334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.020490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.020517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.020684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.020709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.020872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.020906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.021060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.021085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.021262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.021287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.021458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.021483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.021629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.021653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 
00:25:34.576 [2024-07-15 10:38:29.021794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.021819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.022017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.022046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.576 qpair failed and we were unable to recover it. 00:25:34.576 [2024-07-15 10:38:29.022209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.576 [2024-07-15 10:38:29.022237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.022401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.022426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.022588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.022615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.022766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.022794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.022976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.023001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.023156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.023195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.023326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.023354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.023492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.023517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 
00:25:34.577 [2024-07-15 10:38:29.023633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.023658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.023838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.023866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.024015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.024040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.024192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.024236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.024366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.024397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.024538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.024563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.024708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.024748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.024885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.024913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.025077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.025102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.025270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.025297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 
00:25:34.577 [2024-07-15 10:38:29.025459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.025486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.025647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.025672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.025831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.025858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.026003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.026031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.026202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.026227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.026374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.026398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.026533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.026559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.026750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.026775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.026969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.026998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.027127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.027154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 
00:25:34.577 [2024-07-15 10:38:29.027348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.027373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.027549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.027577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.027719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.027746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.027934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.027960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.028100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.028127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.028288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.028316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.028507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.028532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.028641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.577 [2024-07-15 10:38:29.028683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.577 qpair failed and we were unable to recover it. 00:25:34.577 [2024-07-15 10:38:29.028893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.028919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.029063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.029088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 
00:25:34.578 [2024-07-15 10:38:29.029246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.029274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.029415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.029449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.029612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.029636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.029802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.029829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.030011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.030037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.030165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.030189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.030356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.030384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.030522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.030549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.030720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.030745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.030920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.030949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 
00:25:34.578 [2024-07-15 10:38:29.031142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.031169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.031337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.031362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.031506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.031546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.031709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.031736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.031927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.031953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.032131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.032159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.032355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.032381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.032554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.032578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.032719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.032746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.032913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.032941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 
00:25:34.578 [2024-07-15 10:38:29.033102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.033126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.033295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.033323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.033489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.033517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.033682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.033708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.033829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.033853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.034008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.034033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.034204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.034229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.034365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.034393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.034537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.034565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.034706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.034731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 
00:25:34.578 [2024-07-15 10:38:29.034922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.034951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.035119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.035146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.035302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.035327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.035475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.035500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.035847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.035874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.036030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.036055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.036180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.036205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.036344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.036372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.036521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.036546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 00:25:34.578 [2024-07-15 10:38:29.036697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.578 [2024-07-15 10:38:29.036722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.578 qpair failed and we were unable to recover it. 
00:25:34.579 [2024-07-15 10:38:29.036859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.036893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.037056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.037081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.037193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.037239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.037412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.037440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.037613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.037638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.037762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.037786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.037939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.037965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.038084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.038111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.038302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.038330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.038469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.038496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 
00:25:34.579 [2024-07-15 10:38:29.038667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.038691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.038835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.038860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.039016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.039044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.039215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.039241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.039367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.039391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.039593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.039621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.039764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.039789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.039940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.039966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.040087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.040112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.040235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.040260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 
00:25:34.579 [2024-07-15 10:38:29.040424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.040452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.040620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.040649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.040813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.040838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.040961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.040986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.041108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.041133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.041332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.041358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.041551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.041579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.041698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.041726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.041873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.041903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 00:25:34.579 [2024-07-15 10:38:29.042017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.579 [2024-07-15 10:38:29.042042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.579 qpair failed and we were unable to recover it. 
00:25:34.579 [2024-07-15 10:38:29.042196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.579 [2024-07-15 10:38:29.042221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.579 qpair failed and we were unable to recover it.
00:25:34.579 [2024-07-15 10:38:29.042365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.579 [2024-07-15 10:38:29.042390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.579 qpair failed and we were unable to recover it.
00:25:34.579 [2024-07-15 10:38:29.042532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.579 [2024-07-15 10:38:29.042557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.579 qpair failed and we were unable to recover it.
00:25:34.579 [2024-07-15 10:38:29.042683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.579 [2024-07-15 10:38:29.042708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.579 qpair failed and we were unable to recover it.
00:25:34.579 [2024-07-15 10:38:29.042849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.579 [2024-07-15 10:38:29.042882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.579 qpair failed and we were unable to recover it.
00:25:34.579 [2024-07-15 10:38:29.043020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.579 [2024-07-15 10:38:29.043045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.579 qpair failed and we were unable to recover it.
00:25:34.579 [2024-07-15 10:38:29.043180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.579 [2024-07-15 10:38:29.043208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.579 qpair failed and we were unable to recover it.
00:25:34.579 [2024-07-15 10:38:29.043408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.579 [2024-07-15 10:38:29.043433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.579 qpair failed and we were unable to recover it.
00:25:34.579 [2024-07-15 10:38:29.043627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.579 [2024-07-15 10:38:29.043655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.579 qpair failed and we were unable to recover it.
00:25:34.579 [2024-07-15 10:38:29.043818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.579 [2024-07-15 10:38:29.043846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.579 qpair failed and we were unable to recover it.
00:25:34.579 [2024-07-15 10:38:29.044017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.579 [2024-07-15 10:38:29.044042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.579 qpair failed and we were unable to recover it.
00:25:34.579 [2024-07-15 10:38:29.044155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.044197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.044360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.044387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.044533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.044558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.044681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.044706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.044833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.044858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.045040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.045065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.045206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.045234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.045397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.045425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.045588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.045612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.045727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.045752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.045932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.045961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.046131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.046156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.046308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.046333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.046480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.046505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.046673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.046698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.046861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.046896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.047064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.047093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.047279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.047304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.047450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.047475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.047603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.047628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.047779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.047804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.047943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.047972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.048156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.048183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.048326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.048350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.048472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.048496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.048641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.048669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.048864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.048895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.049039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.049064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.049224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.049252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.049425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.049453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.049607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.049632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.049800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.049828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.050028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.050053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.050219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.050246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.050401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.050429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.050600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.050627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.050791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.050818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.050979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.051009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.051156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.051181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.051305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.051345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.580 [2024-07-15 10:38:29.051542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.580 [2024-07-15 10:38:29.051570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.580 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.051727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.051755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.051929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.051955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.052094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.052119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.052291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.052316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.052477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.052504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.052636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.052664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.052801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.052827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.052955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.052981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.053130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.053155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.053301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.053326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.053498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.053525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.053683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.053711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.053902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.053928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.054123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.054150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.054280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.054307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.054479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.054504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.054660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.054685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.054857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.054892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.055032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.055056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.055198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.055239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.055382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.055410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.055578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.055603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.055804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.055832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.056034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.056063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.056265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.056290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.056496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.056524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.056646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.056674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.056836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.056861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.056999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.057025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.057171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.057196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.057346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.057371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.057504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.057531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.057696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.057723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.057903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.057929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.058057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.058082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.581 qpair failed and we were unable to recover it.
00:25:34.581 [2024-07-15 10:38:29.058226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.581 [2024-07-15 10:38:29.058251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.058392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.058417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.058604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.058632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.058782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.058809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.059006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.059032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.059223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.059251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.059407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.059434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.059564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.059589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.059787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.059815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.059985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.060011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.060184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.060209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.060375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.060402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.060535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.060562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.060718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.060743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.060909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.060937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.061078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.061105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.061277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.061301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.061432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.061459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.061590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.061617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.061785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.061810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.061966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.061992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.062135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.062164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.062303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.062328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.062448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.062493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.062633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.062661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.062795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.062820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.063002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.063046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.063209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.063237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.063379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.063404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.063546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.063586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.063792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.063817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.063959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.063985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.064175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.064203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.064332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.064360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.064498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.064523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.064681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.064725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.064850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.064883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.065049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.065074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.065242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.065269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.065441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.065468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.065637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.582 [2024-07-15 10:38:29.065662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.582 qpair failed and we were unable to recover it.
00:25:34.582 [2024-07-15 10:38:29.065851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.065893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.066034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.066062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.066240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.066265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.066414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.066441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.066607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.066635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.066802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.066827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.067046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.067074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.067269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.067296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.067476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.067501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.067615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.067657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.067821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.067848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.068032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.068057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.068201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.068242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.068429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.068456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.068603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.068629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.068803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.068831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.069002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.069028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.069144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.069169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.069292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.069317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.069482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.069509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.069676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.069701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.069895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.069929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.070090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.070118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.070290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.070315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.070457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.070498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.070664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.070691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.070825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.070850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.071007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.071049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.071209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.071236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.071414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.071439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.071557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.071581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.071704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.071729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.071874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.071906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.072071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.072098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.072261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.072289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.072430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.072455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.072607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.072632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.072811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.072839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.073011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.073037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.073159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.073200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.073335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.073363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.583 qpair failed and we were unable to recover it.
00:25:34.583 [2024-07-15 10:38:29.073536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.583 [2024-07-15 10:38:29.073561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.584 qpair failed and we were unable to recover it.
00:25:34.584 [2024-07-15 10:38:29.073702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.073727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.073872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.073913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.074101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.074126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.074256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.074297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.074483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.074510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.074645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.074671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.074814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.074861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.075076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.075101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.075274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.075299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.075467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.075495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 
00:25:34.584 [2024-07-15 10:38:29.075648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.075676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.075905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.075945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.076093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.076118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.076284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.076312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.076477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.076502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.076660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.076687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.076851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.076885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.077055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.077080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.077218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.077258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.077386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.077414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 
00:25:34.584 [2024-07-15 10:38:29.077590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.077616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.077758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.077786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.077943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.077972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.078165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.078190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.078308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.078351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.078490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.078518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.078664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.078689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.078817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.078842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.079025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.079053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.079220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.079245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 
00:25:34.584 [2024-07-15 10:38:29.079438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.079465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.079656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.079683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.079847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.079872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.080059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.080087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.080251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.080279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.080445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.080470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.080674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.080702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.080885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.080913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.081081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.081105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.081272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.081300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 
00:25:34.584 [2024-07-15 10:38:29.081466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.584 [2024-07-15 10:38:29.081495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.584 qpair failed and we were unable to recover it. 00:25:34.584 [2024-07-15 10:38:29.081690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.081715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.081849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.081884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.082056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.082081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.082227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.082252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.082411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.082439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.082601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.082629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.082791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.082820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.083008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.083037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.083237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.083262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 
00:25:34.585 [2024-07-15 10:38:29.083408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.083433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.083553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.083595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.083757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.083784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.083925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.083950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.084105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.084130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.084255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.084279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.084396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.084421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.084571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.084614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.084733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.084761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.084908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.084933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 
00:25:34.585 [2024-07-15 10:38:29.085107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.085131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.085275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.085303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.085470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.085496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.085619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.085660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.085800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.085829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.086008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.086034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.086185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.086210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.086352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.086392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.086535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.086561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.086761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.086788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 
00:25:34.585 [2024-07-15 10:38:29.086956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.086986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.087148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.087173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.087317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.087359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.087496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.087523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.087667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.087696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.087841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.585 [2024-07-15 10:38:29.087888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.585 qpair failed and we were unable to recover it. 00:25:34.585 [2024-07-15 10:38:29.088054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.088082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.088242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.088267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.088457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.088484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.088642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.088670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 
00:25:34.586 [2024-07-15 10:38:29.088837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.088862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.089013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.089038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.089192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.089220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.089351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.089375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.089516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.089541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.089691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.089719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.089889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.089915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.090076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.090104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.090240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.090268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.090439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.090464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 
00:25:34.586 [2024-07-15 10:38:29.090587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.090628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.090784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.090812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.090953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.090979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.091124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.091149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.091324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.091353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.091524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.091550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.091716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.091744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.091935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.091961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.092086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.092112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.092238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.092263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 
00:25:34.586 [2024-07-15 10:38:29.092405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.092430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.092582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.092607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.092773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.092801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.092965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.092994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.093180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.093204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.093390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.093417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.093546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.093574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.093714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.093738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.093889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.093915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.094055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.094084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 
00:25:34.586 [2024-07-15 10:38:29.094249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.094274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.094438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.094467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.094594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.094622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.094822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.094846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.094983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.095012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.095150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.095182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.095378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.586 [2024-07-15 10:38:29.095403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.586 qpair failed and we were unable to recover it. 00:25:34.586 [2024-07-15 10:38:29.095541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.095569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.095732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.095761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.095909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.095935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 
00:25:34.587 [2024-07-15 10:38:29.096084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.096126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.096296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.096323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.096492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.096516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.096694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.096722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.096919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.096945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.097102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.097127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.097250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.097292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.097433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.097461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.097635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.097660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.097805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.097833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 
00:25:34.587 [2024-07-15 10:38:29.098017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.098042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.098159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.098184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.098324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.098368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.098506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.098534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.098674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.098700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.098811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.098836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.098995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.099020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.099171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.099195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.099360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.099388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.099513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.099542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 
00:25:34.587 [2024-07-15 10:38:29.099703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.099728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.099924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.099953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.100093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.100121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.100323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.100348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.100491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.100519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.100686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.100713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.100886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.100912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.101087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.101112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.101288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.101316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.101497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.101522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 
00:25:34.587 [2024-07-15 10:38:29.101680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.101708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.101868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.101903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.102065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.102090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.102286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.102313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.102478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.102506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.102667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.102692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.102817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.102859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.103051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.103079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.103206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.587 [2024-07-15 10:38:29.103231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.587 qpair failed and we were unable to recover it. 00:25:34.587 [2024-07-15 10:38:29.103381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.588 [2024-07-15 10:38:29.103421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.588 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats about 200 further times, timestamps 2024-07-15 10:38:29.103558 through 10:38:29.140914 ...]
00:25:34.592 [2024-07-15 10:38:29.141057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.592 [2024-07-15 10:38:29.141086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.592 qpair failed and we were unable to recover it. 00:25:34.592 [2024-07-15 10:38:29.141211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.592 [2024-07-15 10:38:29.141235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.592 qpair failed and we were unable to recover it. 00:25:34.592 [2024-07-15 10:38:29.141394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.592 [2024-07-15 10:38:29.141419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.592 qpair failed and we were unable to recover it. 00:25:34.592 [2024-07-15 10:38:29.141554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.592 [2024-07-15 10:38:29.141581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.592 qpair failed and we were unable to recover it. 00:25:34.592 [2024-07-15 10:38:29.141709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.592 [2024-07-15 10:38:29.141737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.592 qpair failed and we were unable to recover it. 00:25:34.592 [2024-07-15 10:38:29.141902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.592 [2024-07-15 10:38:29.141928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.592 qpair failed and we were unable to recover it. 00:25:34.592 [2024-07-15 10:38:29.142058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.592 [2024-07-15 10:38:29.142099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.592 qpair failed and we were unable to recover it. 00:25:34.592 [2024-07-15 10:38:29.142264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.592 [2024-07-15 10:38:29.142292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.592 qpair failed and we were unable to recover it. 00:25:34.592 [2024-07-15 10:38:29.142483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.592 [2024-07-15 10:38:29.142508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.592 qpair failed and we were unable to recover it. 00:25:34.592 [2024-07-15 10:38:29.142648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.592 [2024-07-15 10:38:29.142675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.592 qpair failed and we were unable to recover it. 
00:25:34.592 [2024-07-15 10:38:29.142841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.592 [2024-07-15 10:38:29.142869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.592 qpair failed and we were unable to recover it. 00:25:34.592 [2024-07-15 10:38:29.143021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.143046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.143238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.143265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.143456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.143483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.143636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.143660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.143805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.143846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.144017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.144042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.144184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.144209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.144360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.144385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.144525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.144549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 
00:25:34.593 [2024-07-15 10:38:29.144688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.144713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.144905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.144934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.145099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.145126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.145298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.145322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.145432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.145474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.145612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.145640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.145804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.145829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.145987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.146021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.146182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.146210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.146405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.146430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 
00:25:34.593 [2024-07-15 10:38:29.146587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.146614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.146805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.146833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.146983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.147008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.147161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.147185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.147364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.147391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.147588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.147613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.147778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.147805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.147967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.147995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.148159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.148184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.148328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.148356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 
00:25:34.593 [2024-07-15 10:38:29.148513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.148540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.148770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.148798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.148995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.149020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.149191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.149218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.149379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.149403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.149556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.149580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.149708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.149733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.149889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.149932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.150058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.150083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.150228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.150253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 
00:25:34.593 [2024-07-15 10:38:29.150398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.150422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.593 [2024-07-15 10:38:29.150592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.593 [2024-07-15 10:38:29.150620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.593 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.150784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.150811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.151000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.151026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.151143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.151186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.151354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.151382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.151525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.151551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.151749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.151777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.151907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.151936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.152123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.152147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 
00:25:34.594 [2024-07-15 10:38:29.152324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.152351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.152486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.152514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.152711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.152736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.152869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.152913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.153114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.153139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.153287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.153312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.153478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.153505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.153629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.153657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.153792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.153821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.153952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.153977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 
00:25:34.594 [2024-07-15 10:38:29.154090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.154115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.154291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.154316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.154440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.154465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.154616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.154640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.154817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.154842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.154969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.154994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.155146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.155171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.155377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.155402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.155568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.155596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 00:25:34.594 [2024-07-15 10:38:29.155767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.594 [2024-07-15 10:38:29.155795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.594 qpair failed and we were unable to recover it. 
00:25:34.595 [2024-07-15 10:38:29.155959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.155985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.156134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.156176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.156317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.156345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.156514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.156540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.156660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.156701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.156864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.156898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.157032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.157056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.157244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.157272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.157437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.157465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.157629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.157654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 
00:25:34.595 [2024-07-15 10:38:29.157814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.157842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.158000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.158024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.158193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.158218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.158408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.158435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.158573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.158600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.158757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.158789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.158932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.158958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.159116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.159156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.159351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.159376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.159518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.159546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 
00:25:34.595 [2024-07-15 10:38:29.159724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.159751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.159921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.159947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.160074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.160099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.160231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.160255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.160402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.160427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.160550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.160592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.160732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.160759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.160926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.595 [2024-07-15 10:38:29.160951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.595 qpair failed and we were unable to recover it. 00:25:34.595 [2024-07-15 10:38:29.161106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.161131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.161281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.161307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 
00:25:34.596 [2024-07-15 10:38:29.161432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.161456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.161586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.161627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.161761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.161789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.161939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.161964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.162146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.162171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.162335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.162362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.162513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.162538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.162661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.162685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.162860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.162894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.163041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.163066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 
00:25:34.596 [2024-07-15 10:38:29.163192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.163216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.163361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.163388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.163587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.163612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.163757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.163785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.163986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.164012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.164162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.164187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.164358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.164385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.164503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.164531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.164670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.164695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 00:25:34.596 [2024-07-15 10:38:29.164821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.596 [2024-07-15 10:38:29.164845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.596 qpair failed and we were unable to recover it. 
00:25:34.596 [2024-07-15 10:38:29.164965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.596 [2024-07-15 10:38:29.164991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.596 qpair failed and we were unable to recover it.
[... the preceding three-line failure sequence (posix.c:1038 connect() errno = 111, nvme_tcp.c:2383 sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously from 10:38:29.164965 through 10:38:29.182625; only the timestamps change ...]
00:25:34.890 [2024-07-15 10:38:29.182796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.890 [2024-07-15 10:38:29.182821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.890 qpair failed and we were unable to recover it.
00:25:34.890 [2024-07-15 10:38:29.183026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.890 [2024-07-15 10:38:29.183072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420
00:25:34.890 qpair failed and we were unable to recover it.
[... the same three-line failure sequence, now for tqpair=0x7fe64c000b90 (addr=10.0.0.2, port=4420, errno = 111), repeats continuously through 10:38:29.203112; only the timestamps change ...]
00:25:34.893 [2024-07-15 10:38:29.203301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.893 [2024-07-15 10:38:29.203329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420
00:25:34.893 qpair failed and we were unable to recover it.
00:25:34.893 [2024-07-15 10:38:29.203468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.203497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.203691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.203716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.203887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.203932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.204081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.204107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.204255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.204280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.204444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.204474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.204608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.204637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.204792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.204818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.204968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.204994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.205130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.205160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 
00:25:34.893 [2024-07-15 10:38:29.205323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.205348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.205479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.205522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.205665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.205694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.205858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.205890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.206070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.206113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.206245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.206274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.206424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.206450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.206620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.206648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.206830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.206858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.207022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.207048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 
00:25:34.893 [2024-07-15 10:38:29.207168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.207193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.207347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.207375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.207520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.207545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.207674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.207699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.207886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.207912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.208040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.208065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.208226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.208255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.208412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.208440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.208587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.208613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.208761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.208786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 
00:25:34.893 [2024-07-15 10:38:29.208935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.208965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.209140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.209166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.209308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.209333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.209462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.209487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.209634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.893 [2024-07-15 10:38:29.209661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.893 qpair failed and we were unable to recover it. 00:25:34.893 [2024-07-15 10:38:29.209805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.209830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.209977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.210003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.210120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.210146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.210299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.210348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.210518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.210546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 
00:25:34.894 [2024-07-15 10:38:29.210709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.210737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.210875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.210927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.211076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.211102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.211279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.211305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.211483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.211511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.211642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.211671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.211822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.211849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.212027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.212053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.212193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.212223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.212364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.212390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 
00:25:34.894 [2024-07-15 10:38:29.212514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.212540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.212685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.212712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.212873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.212906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.213028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.213069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.213248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.213276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.213416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.213441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.213594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.213619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.213767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.213792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.213966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.213993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.214157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.214185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 
00:25:34.894 [2024-07-15 10:38:29.214322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.214351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.214493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.214519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.214712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.214740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.214901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.214930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.215095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.215120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.215257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.215283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.215434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.215460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.215621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.215646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.215765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.215790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.215968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.215997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 
00:25:34.894 [2024-07-15 10:38:29.216163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.216189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.216366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.216408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.216588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.216616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.216785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.216810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.216964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.216991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.217110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.894 [2024-07-15 10:38:29.217136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.894 qpair failed and we were unable to recover it. 00:25:34.894 [2024-07-15 10:38:29.217327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.217352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.217497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.217522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.217672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.217702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.217892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.217938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 
00:25:34.895 [2024-07-15 10:38:29.218067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.218092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.218260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.218287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.218457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.218483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.218644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.218673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.218801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.218829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.218975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.219002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.219122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.219148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.219320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.219348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.219490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.219516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.219667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.219711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 
00:25:34.895 [2024-07-15 10:38:29.219840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.219869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.220048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.220074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.220225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.220253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.220413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.220441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.220584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.220609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.220723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.220748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.220922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.220951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.221127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.221154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.221324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.221354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.221546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.221574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 
00:25:34.895 [2024-07-15 10:38:29.221736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.221764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.221945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.221972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.222120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.222146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.222271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.222298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.222418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.222462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.222650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.222695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.222868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.222903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.223037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.223071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.223271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.223308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.223467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.223492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 
00:25:34.895 [2024-07-15 10:38:29.223662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.223690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.223826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.223857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.224008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.224034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.224226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.224254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.224475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.224526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.224682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.224707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.895 [2024-07-15 10:38:29.224836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.895 [2024-07-15 10:38:29.224863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.895 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.225033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.225061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.225266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.225295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.225429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.225457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 
00:25:34.896 [2024-07-15 10:38:29.225616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.225645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.225797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.225822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.225978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.226005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.226132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.226157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.226278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.226303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.226469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.226497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.226653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.226681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.226820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.226846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.227000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.227026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.227200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.227230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 
00:25:34.896 [2024-07-15 10:38:29.227400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.227426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.227547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.227590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.227732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.227762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.227916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.227942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.228067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.228092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.228221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.228246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.228430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.228455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.228646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.228674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.228829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.228858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.229036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.229062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 
00:25:34.896 [2024-07-15 10:38:29.229183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.229226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.229383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.229423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.229581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.229607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.229756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.229782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.229991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.230021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.230195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.230221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.230352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.230379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.230533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.230574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.230742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.230769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.230953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.230983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 
00:25:34.896 [2024-07-15 10:38:29.231174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.231203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.231391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.231417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.231560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.231590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.231755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.231784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.231947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.231974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.232128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.232154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.896 [2024-07-15 10:38:29.232342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.896 [2024-07-15 10:38:29.232371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.896 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.232541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.232566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.232689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.232738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.232914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.232943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 
00:25:34.897 [2024-07-15 10:38:29.233084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.233109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.233256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.233299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.233464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.233492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.233651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.233676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.233823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.233852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.234036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.234065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.234218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.234243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.234380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.234406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.234553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.234583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.234763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.234789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 
00:25:34.897 [2024-07-15 10:38:29.234962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.234996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.235185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.235214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.235417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.235443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.235586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.235615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.235801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.235829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.236008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.236035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.236159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.236201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.236336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.236365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.236535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.236560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.236711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.236737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 
00:25:34.897 [2024-07-15 10:38:29.236903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.236945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.237093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.237119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.237269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.237312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.237468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.237510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.237696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.237723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.237854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.237892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.238050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.238079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.238229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.238256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.238379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.238406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.238623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.238649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 
00:25:34.897 [2024-07-15 10:38:29.238781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.238806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.238939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.897 [2024-07-15 10:38:29.238982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.897 qpair failed and we were unable to recover it. 00:25:34.897 [2024-07-15 10:38:29.239150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.239178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.239329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.239354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.239476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.239501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.239706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.239734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.239905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.239931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.240059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.240085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.240224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.240257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.240406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.240433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 
00:25:34.898 [2024-07-15 10:38:29.240577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.240618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.240784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.240812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.240949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.240976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.241111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.241137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.241303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.241331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.241502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.241527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.241656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.241682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.241810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.241836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.241965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.241991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.242136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.242162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 
00:25:34.898 [2024-07-15 10:38:29.242304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.242333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.242477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.242503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.242662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.242687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.242857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.242896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.243074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.243099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.243225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.243270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.243437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.243492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.243637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.243663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.243793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.243818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.243966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.243993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 
00:25:34.898 [2024-07-15 10:38:29.244139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.244166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.244306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.244334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.244509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.244535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.244659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.244684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.244830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.244874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.245039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.245079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.245286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.245312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.245462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.245489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.245652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.245681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.245838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.245862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 
00:25:34.898 [2024-07-15 10:38:29.246003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.246049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.246239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.246289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.246458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.898 [2024-07-15 10:38:29.246483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.898 qpair failed and we were unable to recover it. 00:25:34.898 [2024-07-15 10:38:29.246600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.246643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.246832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.246862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.247014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.247039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.247157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.247199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.247386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.247436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.247603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.247633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.247774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.247800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 
00:25:34.899 [2024-07-15 10:38:29.247926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.247952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.248104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.248129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.248251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.248276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.248400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.248424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.248542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.248567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.248683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.248710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.248887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.248915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.249091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.249124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.249249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.249275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.249425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.249451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 
00:25:34.899 [2024-07-15 10:38:29.249621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.249649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.249783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.249810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.249950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.249979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.250108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.250136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.250312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.250342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.250510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.250538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.250703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.250744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.250913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.250941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.251091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.251117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.251247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.251272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 
00:25:34.899 [2024-07-15 10:38:29.251410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.251457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.251643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.251687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.251813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.251839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.251998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.252043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.252252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.252296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.252525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.252575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.252697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.252722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.252887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.252913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.253061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.253086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.253259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.253302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 
00:25:34.899 [2024-07-15 10:38:29.253473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.253516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.253719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.253763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.253931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.253960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.899 [2024-07-15 10:38:29.254138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.899 [2024-07-15 10:38:29.254165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.899 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.254318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.254363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.254565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.254608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.254771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.254796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.254943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.254987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.255153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.255201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.255399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.255442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 
00:25:34.900 [2024-07-15 10:38:29.255622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.255666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.255791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.255817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.255957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.256015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.256192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.256222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.256357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.256388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.256525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.256554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.256716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.256745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.256890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.256934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.257093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.257122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.257309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.257337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 
00:25:34.900 [2024-07-15 10:38:29.257489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.257517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.257691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.257722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.257910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.257936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.258105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.258148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.258322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.258367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.258516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.258559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.258709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.258735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.258883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.258927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.259068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.259111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.259248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.259291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 
00:25:34.900 [2024-07-15 10:38:29.259466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.259509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.259634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.259659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.259804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.259829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.259975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.260020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.260187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.260215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.260426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.260470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.260651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.260701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.260855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.260890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.261050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.261078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 00:25:34.900 [2024-07-15 10:38:29.261237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.900 [2024-07-15 10:38:29.261264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.900 qpair failed and we were unable to recover it. 
00:25:34.902 [2024-07-15 10:38:29.272570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.902 [2024-07-15 10:38:29.272595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.902 qpair failed and we were unable to recover it. 00:25:34.902 [2024-07-15 10:38:29.272740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.902 [2024-07-15 10:38:29.272768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.902 qpair failed and we were unable to recover it. 00:25:34.902 [2024-07-15 10:38:29.272935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.902 [2024-07-15 10:38:29.272976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.902 qpair failed and we were unable to recover it. 00:25:34.902 [2024-07-15 10:38:29.273113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.902 [2024-07-15 10:38:29.273145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.902 qpair failed and we were unable to recover it. 00:25:34.902 [2024-07-15 10:38:29.273268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.902 [2024-07-15 10:38:29.273294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.902 qpair failed and we were unable to recover it. 00:25:34.902 [2024-07-15 10:38:29.273417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.902 [2024-07-15 10:38:29.273462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.902 qpair failed and we were unable to recover it. 00:25:34.902 [2024-07-15 10:38:29.273599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.902 [2024-07-15 10:38:29.273628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.902 qpair failed and we were unable to recover it. 00:25:34.902 [2024-07-15 10:38:29.273769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.902 [2024-07-15 10:38:29.273813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:34.902 qpair failed and we were unable to recover it. 00:25:34.902 [2024-07-15 10:38:29.273981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.902 [2024-07-15 10:38:29.274007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.902 qpair failed and we were unable to recover it. 00:25:34.902 [2024-07-15 10:38:29.274130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.902 [2024-07-15 10:38:29.274172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.902 qpair failed and we were unable to recover it. 
00:25:34.905 [2024-07-15 10:38:29.296584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.905 [2024-07-15 10:38:29.296609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.905 qpair failed and we were unable to recover it. 00:25:34.905 [2024-07-15 10:38:29.296724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.905 [2024-07-15 10:38:29.296749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.905 qpair failed and we were unable to recover it. 00:25:34.905 [2024-07-15 10:38:29.296909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.905 [2024-07-15 10:38:29.296947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.905 qpair failed and we were unable to recover it. 00:25:34.905 [2024-07-15 10:38:29.297147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.905 [2024-07-15 10:38:29.297173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.905 qpair failed and we were unable to recover it. 00:25:34.905 [2024-07-15 10:38:29.297290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.905 [2024-07-15 10:38:29.297332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.905 qpair failed and we were unable to recover it. 00:25:34.905 [2024-07-15 10:38:29.297466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.905 [2024-07-15 10:38:29.297494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.905 qpair failed and we were unable to recover it. 00:25:34.905 [2024-07-15 10:38:29.297657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.905 [2024-07-15 10:38:29.297682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.905 qpair failed and we were unable to recover it. 00:25:34.905 [2024-07-15 10:38:29.297838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.905 [2024-07-15 10:38:29.297865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.905 qpair failed and we were unable to recover it. 00:25:34.905 [2024-07-15 10:38:29.298045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.905 [2024-07-15 10:38:29.298073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.905 qpair failed and we were unable to recover it. 00:25:34.905 [2024-07-15 10:38:29.298227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.298252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 
00:25:34.906 [2024-07-15 10:38:29.298401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.298443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.298564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.298592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.298773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.298801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.298985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.299010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.299121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.299146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.299334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.299359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.299570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.299620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.299762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.299789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.299959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.299984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.300131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.300156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 
00:25:34.906 [2024-07-15 10:38:29.300333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.300358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.300481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.300506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.300674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.300703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.300824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.300849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.301037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.301062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.301223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.301250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.301410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.301438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.301575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.301600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.301745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.301785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.301973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.302002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 
00:25:34.906 [2024-07-15 10:38:29.302145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.302170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.302295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.302320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.302439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.302464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.302646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.302671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.302839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.302867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.303049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.303074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.303230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.303255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.303440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.303489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.303653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.303680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.303827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.303853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 
00:25:34.906 [2024-07-15 10:38:29.303994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.304021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.304188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.304216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.304365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.304394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.304562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.304603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.304822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.304847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.305017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.305043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.305238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.305266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.305430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.305458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.305597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.305622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.906 [2024-07-15 10:38:29.305733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.305759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 
00:25:34.906 [2024-07-15 10:38:29.305953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.906 [2024-07-15 10:38:29.305978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.906 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.306129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.306154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.306330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.306355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.306475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.306500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.306653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.306678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.306827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.306852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.307026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.307055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.307224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.307249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.307409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.307437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.307602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.307630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 
00:25:34.907 [2024-07-15 10:38:29.307798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.307823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.307975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.308000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.308150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.308192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.308350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.308379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.308568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.308596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.308784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.308812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.308949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.308974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.309132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.309157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.309322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.309350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.309508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.309533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 
00:25:34.907 [2024-07-15 10:38:29.309679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.309704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.309818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.309843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.309989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.310015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.310167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.310211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.310342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.310371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.310539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.310564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.310726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.310754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.310899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.310939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.311074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.311099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.311246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.311287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 
00:25:34.907 [2024-07-15 10:38:29.311463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.311488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.311629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.311654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.311784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.311827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.312009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.312034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.312179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.312204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.312373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.312400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.312551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.312578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.312718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.312743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.312862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.312893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.313057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.313085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 
00:25:34.907 [2024-07-15 10:38:29.313278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.313303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.907 [2024-07-15 10:38:29.313433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.907 [2024-07-15 10:38:29.313459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.907 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.313577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.313602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.313727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.313751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.313881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.313907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.314105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.314133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.314324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.314348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.314520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.314547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.314718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.314746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.314888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.314925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 
00:25:34.908 [2024-07-15 10:38:29.315060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.315104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.315285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.315310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.315482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.315507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.315631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.315656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.315809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.315835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.315982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.316008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.316172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.316201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.316365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.316393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.316587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.316612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.316734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.316759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 
00:25:34.908 [2024-07-15 10:38:29.316906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.316932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.317060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.317084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.317208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.317233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.317374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.317399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.317538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.317563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.317720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.317748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.317929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.317955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.318081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.318106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.318257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.318283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.318427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.318452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 
00:25:34.908 [2024-07-15 10:38:29.318624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.318649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.318814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.318842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.318993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.319019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.319172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.319197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.319323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.319365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.319520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.319547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.319737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.319761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.319926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.319969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.320095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.320123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.320297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.320322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 
00:25:34.908 [2024-07-15 10:38:29.320511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.320538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.320687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.320719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.320882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.908 [2024-07-15 10:38:29.320907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.908 qpair failed and we were unable to recover it. 00:25:34.908 [2024-07-15 10:38:29.321073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.909 [2024-07-15 10:38:29.321101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.909 qpair failed and we were unable to recover it. 00:25:34.909 [2024-07-15 10:38:29.321242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.909 [2024-07-15 10:38:29.321270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.909 qpair failed and we were unable to recover it. 00:25:34.909 [2024-07-15 10:38:29.321436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.909 [2024-07-15 10:38:29.321461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.909 qpair failed and we were unable to recover it. 00:25:34.909 [2024-07-15 10:38:29.321575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.909 [2024-07-15 10:38:29.321600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.909 qpair failed and we were unable to recover it. 00:25:34.909 [2024-07-15 10:38:29.321752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.909 [2024-07-15 10:38:29.321779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.909 qpair failed and we were unable to recover it. 00:25:34.909 [2024-07-15 10:38:29.321946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.909 [2024-07-15 10:38:29.321971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.909 qpair failed and we were unable to recover it. 00:25:34.909 [2024-07-15 10:38:29.322136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.909 [2024-07-15 10:38:29.322163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.909 qpair failed and we were unable to recover it. 
00:25:34.909 [2024-07-15 10:38:29.322294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.909 [2024-07-15 10:38:29.322322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.909 qpair failed and we were unable to recover it.
[... 179 further identical connect()/qpair failures for tqpair=0xc94200, 10:38:29.322519 through 10:38:29.356156 ...]
00:25:34.913 [2024-07-15 10:38:29.356360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.913 [2024-07-15 10:38:29.356403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.913 qpair failed and we were unable to recover it.
[... 8 further identical failures for tqpair=0x7fe654000b90, 10:38:29.356578 through 10:38:29.357793 ...]
00:25:34.913 [2024-07-15 10:38:29.357986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.913 [2024-07-15 10:38:29.358015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.913 qpair failed and we were unable to recover it.
[... 19 further identical failures for tqpair=0xc94200, 10:38:29.358160 through 10:38:29.361793 ...]
00:25:34.914 [2024-07-15 10:38:29.361941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.914 [2024-07-15 10:38:29.361966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.914 qpair failed and we were unable to recover it.
00:25:34.914 [2024-07-15 10:38:29.362122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.362148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 00:25:34.914 [2024-07-15 10:38:29.362270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.362295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 00:25:34.914 [2024-07-15 10:38:29.362475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.362509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 00:25:34.914 [2024-07-15 10:38:29.362684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.362711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 00:25:34.914 [2024-07-15 10:38:29.362906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.362937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 00:25:34.914 [2024-07-15 10:38:29.363102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.363129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 00:25:34.914 [2024-07-15 10:38:29.363399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.363457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 00:25:34.914 [2024-07-15 10:38:29.363616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.363641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 00:25:34.914 [2024-07-15 10:38:29.363768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.363792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 00:25:34.914 [2024-07-15 10:38:29.363931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.363972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 
00:25:34.914 [2024-07-15 10:38:29.364176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.364203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 00:25:34.914 [2024-07-15 10:38:29.364378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.364405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 00:25:34.914 [2024-07-15 10:38:29.364583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.364642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 00:25:34.914 [2024-07-15 10:38:29.364792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.364817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 00:25:34.914 [2024-07-15 10:38:29.364995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.365021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 00:25:34.914 [2024-07-15 10:38:29.365232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.365257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 00:25:34.914 [2024-07-15 10:38:29.365415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.365443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 00:25:34.914 [2024-07-15 10:38:29.365610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.914 [2024-07-15 10:38:29.365638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.914 qpair failed and we were unable to recover it. 00:25:34.914 [2024-07-15 10:38:29.365795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.365823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.365965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.365991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 
00:25:34.915 [2024-07-15 10:38:29.366147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.366188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.366388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.366448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.366652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.366677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.366845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.366873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.367024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.367051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.367207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.367232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.367377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.367405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.367570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.367598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.367824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.367852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.368039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.368065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 
00:25:34.915 [2024-07-15 10:38:29.368231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.368260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.368454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.368479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.368660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.368688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.368891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.368933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.369051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.369076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.369229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.369272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.369470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.369495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.369649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.369674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.369794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.369837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.370004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.370044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 
00:25:34.915 [2024-07-15 10:38:29.370170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.370197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.370393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.370421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.370673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.370733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.370889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.370915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.371088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.371113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.371325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.371374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.371557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.371583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.371711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.371753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.371925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.371967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.372125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.372150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 
00:25:34.915 [2024-07-15 10:38:29.372315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.372343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.372512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.372540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.372720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.372746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.372910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.372942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.373084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.373110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.373254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.373279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.373452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.373479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.373610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.373638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.373799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.915 [2024-07-15 10:38:29.373828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.915 qpair failed and we were unable to recover it. 00:25:34.915 [2024-07-15 10:38:29.373967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.373993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 
00:25:34.916 [2024-07-15 10:38:29.374147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.374172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.374311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.374336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.374498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.374525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.374686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.374714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.374887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.374913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.375061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.375086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.375238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.375272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.375441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.375466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.375632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.375659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.375832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.375857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 
00:25:34.916 [2024-07-15 10:38:29.376053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.376078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.376261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.376288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.376502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.376552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.376692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.376716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.376844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.376869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.377034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.377059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.377203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.377227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.377399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.377427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.377679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.377732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.377902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.377928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 
00:25:34.916 [2024-07-15 10:38:29.378062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.378088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.378239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.378264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.378412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.378437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.378584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.378609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.378787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.378829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.379036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.379063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.379201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.379229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.379374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.379402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.379569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.379594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.379734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.379761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 
00:25:34.916 [2024-07-15 10:38:29.379939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.379965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.380088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.380113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.380261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.380286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.380431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.380456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.380581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.380605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.380728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.380752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.380865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.380896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.381039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.381064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.381250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.381278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 00:25:34.916 [2024-07-15 10:38:29.381420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.381448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.916 qpair failed and we were unable to recover it. 
00:25:34.916 [2024-07-15 10:38:29.381604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.916 [2024-07-15 10:38:29.381630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.381799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.381827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.381967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.381993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.382108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.382134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.382285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.382310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.382467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.382509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.382653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.382678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.382855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.382892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.383038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.383064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.383213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.383238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 
00:25:34.917 [2024-07-15 10:38:29.383412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.383440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.383566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.383593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.383762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.383786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.383905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.383931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.384074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.384100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.384216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.384241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.384401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.384427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.384571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.384595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.384756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.384784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.384957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.384982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 
00:25:34.917 [2024-07-15 10:38:29.385128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.385153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.385341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.385367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.385557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.385585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.385719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.385747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.385915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.385940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.386087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.386111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.386276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.386304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.386468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.386494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.386685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.386713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.386851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.386892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 
00:25:34.917 [2024-07-15 10:38:29.387059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.387084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.387236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.387261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.387445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.387474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.387641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.387667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.387784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.387815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.388013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.388039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.388193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.388218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.388387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.388415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.388579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.388607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.388785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.388810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 
00:25:34.917 [2024-07-15 10:38:29.389011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.389040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.389196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.917 [2024-07-15 10:38:29.389224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.917 qpair failed and we were unable to recover it. 00:25:34.917 [2024-07-15 10:38:29.389392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.389417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.389560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.389601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.389802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.389827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.389979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.390005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.390174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.390202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.390342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.390369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.390581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.390606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.390779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.390807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 
00:25:34.918 [2024-07-15 10:38:29.390968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.390996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.391170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.391195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.391321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.391346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.391528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.391553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.391722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.391750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.391923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.391966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.392090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.392115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.392305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.392330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.392466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.392494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.392658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.392686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 
00:25:34.918 [2024-07-15 10:38:29.392891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.392917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.393080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.393108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.393276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.393304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.393478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.393503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.393617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.393657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.393821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.393849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.394026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.394052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.394211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.394239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.394396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.394424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.394596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.394621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 
00:25:34.918 [2024-07-15 10:38:29.394779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.394820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.394995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.395020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.395143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.395168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.395298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.395324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.395465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.395490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.395622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.395651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.395845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.395872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.918 qpair failed and we were unable to recover it. 00:25:34.918 [2024-07-15 10:38:29.396015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.918 [2024-07-15 10:38:29.396042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.396218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.396243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.396410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.396441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 
00:25:34.919 [2024-07-15 10:38:29.396650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.396674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.396815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.396840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.396978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.397021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.397206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.397235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.397400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.397425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.397582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.397610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.397752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.397780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.397947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.397973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.398104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.398129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.398347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.398372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 
00:25:34.919 [2024-07-15 10:38:29.398517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.398542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.398666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.398710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.398843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.398871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.399051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.399076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.399269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.399297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.399424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.399452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.399650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.399675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.399811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.399845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.400010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.400036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.400157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.400182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 
00:25:34.919 [2024-07-15 10:38:29.400311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.400351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.400482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.400510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.400672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.400701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.400840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.400887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.401060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.401088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.401285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.401309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.401448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.401476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.401662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.401689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.401888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.401914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.402062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.402087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 
00:25:34.919 [2024-07-15 10:38:29.402257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.402285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.402458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.402483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.402677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.402705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.402888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.402913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.403086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.403111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.403275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.403303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.403443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.403470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.403669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.403694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.919 [2024-07-15 10:38:29.403846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.919 [2024-07-15 10:38:29.403874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.919 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.404051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.404079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 
00:25:34.920 [2024-07-15 10:38:29.404254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.404280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.404405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.404446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.404616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.404641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.404826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.404851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.405026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.405056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.405184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.405211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.405342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.405367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.405558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.405585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.405753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.405789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.405931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.405957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 
00:25:34.920 [2024-07-15 10:38:29.406100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.406126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.406301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.406330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.406496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.406521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.406652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.406678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.406816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.406843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.407046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.407072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.407204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.407229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.407378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.407403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.407518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.407543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.407689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.407730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 
00:25:34.920 [2024-07-15 10:38:29.407907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.407933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.408081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.408107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.408256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.408284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.408441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.408474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.408624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.408649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.408792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.408817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.409002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.409027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.409151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.409176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.409351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.409378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.409527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.409555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 
00:25:34.920 [2024-07-15 10:38:29.409712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.409736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.409887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.409930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.410091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.410119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.410273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.410298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.410430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.410455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.410605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.410629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.410749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.410773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.410904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.410947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.411084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.411111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 00:25:34.920 [2024-07-15 10:38:29.411254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.920 [2024-07-15 10:38:29.411279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.920 qpair failed and we were unable to recover it. 
00:25:34.920 [2024-07-15 10:38:29.411413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.411438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.411605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.411633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.411769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.411794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.411912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.411938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.412111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.412139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.412298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.412323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.412467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.412496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.412662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.412690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.412834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.412859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.413010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.413053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 
00:25:34.921 [2024-07-15 10:38:29.413241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.413273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.413415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.413439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.413593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.413633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.413768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.413798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.414000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.414026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.414145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.414170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.414382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.414406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.414544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.414568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.414697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.414741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.414883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.414911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 
00:25:34.921 [2024-07-15 10:38:29.415052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.415077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.415205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.415230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.415409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.415436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.415577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.415601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.415722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.415747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.415918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.415947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.416080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.416105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.416261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.416285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.416416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.416441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.416571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.416599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 
00:25:34.921 [2024-07-15 10:38:29.416745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.416773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.416943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.416968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.417090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.417115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.417226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.417251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.417390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.417418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.417585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.417610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.417748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.417790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.417930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.417958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.418101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.418126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.418266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.418308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 
00:25:34.921 [2024-07-15 10:38:29.418470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.921 [2024-07-15 10:38:29.418497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.921 qpair failed and we were unable to recover it. 00:25:34.921 [2024-07-15 10:38:29.418664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.418689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.418851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.418909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.419078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.419107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.419270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.419295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.419420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.419444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.419558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.419583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.419701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.419726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.419901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.419933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.420098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.420126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 
00:25:34.922 [2024-07-15 10:38:29.420263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.420289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.420429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.420458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.420602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.420639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.420808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.420833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.420973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.421016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.421210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.421238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.421415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.421440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.421566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.421612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.421749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.421777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.421927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.421952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 
00:25:34.922 [2024-07-15 10:38:29.422103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.422128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.422287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.422312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.422427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.422452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.422571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.422596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.422770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.422794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.422921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.422947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.423055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.423081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.423278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.423308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.423449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.423474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.423601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.423626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 
00:25:34.922 [2024-07-15 10:38:29.423789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.423817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.423950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.423976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.424121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.424146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.424297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.424325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.424490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.424515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.424673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.424701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.922 [2024-07-15 10:38:29.424830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.922 [2024-07-15 10:38:29.424857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.922 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.425037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.425063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.425193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.425239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.425402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.425430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 
00:25:34.923 [2024-07-15 10:38:29.425579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.425604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.425728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.425753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.425943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.425972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.426138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.426163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.426307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.426348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.426512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.426540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.426705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.426730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.426849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.426874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.427057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.427085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.427231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.427256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 
00:25:34.923 [2024-07-15 10:38:29.427443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.427470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.427670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.427698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.427883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.427909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.428044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.428069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.428215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.428240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.428357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.428382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.428552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.428578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.428787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.428815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.428957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.428983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.429106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.429131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 
00:25:34.923 [2024-07-15 10:38:29.429299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.429326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.429496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.429520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.429639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.429681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.429813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.429841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.430010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.430035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.430186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.430210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.430328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.430354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.430485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.430510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.430665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.430693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.430862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.430907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 
00:25:34.923 [2024-07-15 10:38:29.431052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.431077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.431274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.431302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.431472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.431496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.431637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.431662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.431785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.431825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.431970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.431996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.432142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.432167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.923 [2024-07-15 10:38:29.432286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.923 [2024-07-15 10:38:29.432326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.923 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.432516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.432543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.432721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.432751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 
00:25:34.924 [2024-07-15 10:38:29.432908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.432933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.433060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.433085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.433236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.433260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.433429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.433456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.433618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.433646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.433814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.433838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.433978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.434004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.434231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.434258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.434461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.434485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.434643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.434671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 
00:25:34.924 [2024-07-15 10:38:29.434834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.434862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.435013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.435038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.435189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.435230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.435370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.435398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.435536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.435561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.435674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.435698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.435894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.435923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.436121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.436152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.436321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.436349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.436536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.436563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 
00:25:34.924 [2024-07-15 10:38:29.436726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.436752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.436874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.436906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.437056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.437083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.437252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.437277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.437403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.437428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.437573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.437598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.437763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.437790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.437966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.437992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.438112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.438137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.438308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.438332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 
00:25:34.924 [2024-07-15 10:38:29.438490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.438517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.438678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.438705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.438863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.438896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.439049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.439075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.439224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.439249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.439401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.439426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.439590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.439618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.439785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.439813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.439962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.924 [2024-07-15 10:38:29.439989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.924 qpair failed and we were unable to recover it. 00:25:34.924 [2024-07-15 10:38:29.440103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.440128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 
00:25:34.925 [2024-07-15 10:38:29.440284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.440317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.440455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.440480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.440625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.440651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.440826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.440853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.441063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.441089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.441249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.441277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.441456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.441483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.441641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.441665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.441780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.441804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.441985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.442013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 
00:25:34.925 [2024-07-15 10:38:29.442184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.442210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.442378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.442406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.442565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.442592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.442789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.442814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.442941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.442967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.443140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.443165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.443344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.443369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.443574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.443601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.443793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.443820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.443957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.443983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 
00:25:34.925 [2024-07-15 10:38:29.444129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.444154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.444304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.444333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.444513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.444538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.444741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.444768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.444940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.444966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.445118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.445143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.445339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.445367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.445563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.445595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.445736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.445761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.445892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.445918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 
00:25:34.925 [2024-07-15 10:38:29.446045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.446070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.446217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.446242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.446409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.446437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.446594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.446623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.446793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.446818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.446946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.446972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.447112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.447137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.447263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.447289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.447465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.447507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 00:25:34.925 [2024-07-15 10:38:29.447646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.447673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.925 qpair failed and we were unable to recover it. 
00:25:34.925 [2024-07-15 10:38:29.447813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.925 [2024-07-15 10:38:29.447838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.447995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.448020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.448207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.448235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.448401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.448426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.448555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.448596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.448757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.448785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.448958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.448984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.449159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.449183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.449376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.449403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.449535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.449561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 
00:25:34.926 [2024-07-15 10:38:29.449679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.449704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.449885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.449913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.450105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.450130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.450277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.450305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.450425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.450452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.450624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.450649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.450782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.450826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.450994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.451019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.451170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.451196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.451367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.451394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 
00:25:34.926 [2024-07-15 10:38:29.451581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.451609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.451775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.451803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.451979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.452004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.452154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.452195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.452395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.452420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.452551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.452578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.452717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.452745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.452888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.452914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.453037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.453066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 00:25:34.926 [2024-07-15 10:38:29.453250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.926 [2024-07-15 10:38:29.453277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.926 qpair failed and we were unable to recover it. 
00:25:34.926 [2024-07-15 10:38:29.453470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.926 [2024-07-15 10:38:29.453494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.926 qpair failed and we were unable to recover it.
00:25:34.926 [2024-07-15 10:38:29.453660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.926 [2024-07-15 10:38:29.453688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.926 qpair failed and we were unable to recover it.
00:25:34.926 [2024-07-15 10:38:29.453874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.926 [2024-07-15 10:38:29.453911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.926 qpair failed and we were unable to recover it.
00:25:34.926 [2024-07-15 10:38:29.454046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.926 [2024-07-15 10:38:29.454071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.926 qpair failed and we were unable to recover it.
00:25:34.926 [2024-07-15 10:38:29.454264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.926 [2024-07-15 10:38:29.454291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.926 qpair failed and we were unable to recover it.
00:25:34.926 [2024-07-15 10:38:29.454423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.926 [2024-07-15 10:38:29.454450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.926 qpair failed and we were unable to recover it.
00:25:34.926 [2024-07-15 10:38:29.454618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.926 [2024-07-15 10:38:29.454643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.926 qpair failed and we were unable to recover it.
00:25:34.926 [2024-07-15 10:38:29.454818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.926 [2024-07-15 10:38:29.454846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.926 qpair failed and we were unable to recover it.
00:25:34.926 [2024-07-15 10:38:29.455050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.926 [2024-07-15 10:38:29.455076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.926 qpair failed and we were unable to recover it.
00:25:34.926 [2024-07-15 10:38:29.455226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.926 [2024-07-15 10:38:29.455250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.926 qpair failed and we were unable to recover it.
00:25:34.926 [2024-07-15 10:38:29.455364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.926 [2024-07-15 10:38:29.455388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.926 qpair failed and we were unable to recover it.
00:25:34.926 [2024-07-15 10:38:29.455530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.926 [2024-07-15 10:38:29.455557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.926 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.455707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.455732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.455884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.455909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.456063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.456088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.456230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.456255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.456403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.456428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.456548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.456573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.456725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.456750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.456865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.456920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.457106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.457131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.457257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.457282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.457468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.457495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.457677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.457705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.457843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.457868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.458003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.458033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.458184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.458213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.458345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.458371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.458519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.458544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.458720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.458748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.458914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.458940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.459066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.459092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.459262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.459307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.459470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.459494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.459682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.459710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.459870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.459904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.460064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.460089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.460262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.460290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.460430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.460458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.460660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.460685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.460829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.460856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.461049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.461077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.461242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.461267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.461473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.461502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.461668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.461696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.461865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.461897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.462017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.462060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.927 qpair failed and we were unable to recover it.
00:25:34.927 [2024-07-15 10:38:29.462230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.927 [2024-07-15 10:38:29.462258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.462457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.462482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.462610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.462636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.462755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.462779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.462905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.462930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.463102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.463127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.463316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.463342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.463457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.463481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.463639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.463681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.463872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.463907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.464054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.464079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.464265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.464293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.464454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.464481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.464681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.464706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.464867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.464914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.465076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.465104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.465275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.465300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.465451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.465492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.465689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.465716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.465867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.465906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.466022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.466047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.466248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.466275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.466408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.466433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.466549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.466574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.466771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.466798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.466992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.467018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.467189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.467217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.467415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.467440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.467582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.467607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.467727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.467777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.467955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.467981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.468126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.468151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.468281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.468305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.468483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.468508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.468650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.468674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.468801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.468825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.468971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.468998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.469147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.469172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.469315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.469343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.469510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.469535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.469683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.928 [2024-07-15 10:38:29.469707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.928 qpair failed and we were unable to recover it.
00:25:34.928 [2024-07-15 10:38:29.469866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.469902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.470069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.470098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.470293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.470319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.470496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.470523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.470683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.470711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.470872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.470909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.471047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.471075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.471218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.471245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.471378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.471402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.471551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.471576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.471703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.471728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.471890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.471916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.472059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.472084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.472253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.472280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.472444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.472469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.472596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.472637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.472834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.472861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.473012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.473037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.473160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.473184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.473375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.473420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.473590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.473616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.473778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.473806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.473949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.473978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.474190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.474215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.474383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.474411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.474619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.474673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.474825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.474851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.474987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.475012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.475179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.475207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.475378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.475402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.475520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.475560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.475719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.475746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.475916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.475955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.476107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.476134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.476317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.476344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.476493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.476519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.476668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.476711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.476937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.476977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.477147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.477173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.477346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.477375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.477612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.477664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.477860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.477892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.478053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.478078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.478245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.478273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.929 [2024-07-15 10:38:29.478415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.929 [2024-07-15 10:38:29.478439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.929 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.478587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.478629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.478826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.478853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.479010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.479035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.479153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.479195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.479328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.479357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.479531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.479556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.479678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.479703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.479907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.479938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.480117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.480143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.480311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.480339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.480552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.480602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.480778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.480804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.480978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.481009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.481175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.481203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.481371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.481400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.481576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.481600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.481774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.481803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.482007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.482042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.482185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.482212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.482347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.482375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.482533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.482557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.482680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.482722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.482861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.482897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.483043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.483069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.483183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.483207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.483351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.483376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.483538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.483563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.483758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.483786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.483968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.483995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.484122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.484147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.484292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.484316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.484446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.484474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.484673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.484698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.484856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.484889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.485057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.485083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.485235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.485260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.485422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.930 [2024-07-15 10:38:29.485450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:34.930 qpair failed and we were unable to recover it.
00:25:34.930 [2024-07-15 10:38:29.485580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.930 [2024-07-15 10:38:29.485607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.930 qpair failed and we were unable to recover it. 00:25:34.930 [2024-07-15 10:38:29.485748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.930 [2024-07-15 10:38:29.485774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.930 qpair failed and we were unable to recover it. 00:25:34.930 [2024-07-15 10:38:29.485928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.930 [2024-07-15 10:38:29.485971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.930 qpair failed and we were unable to recover it. 00:25:34.930 [2024-07-15 10:38:29.486117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.930 [2024-07-15 10:38:29.486146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.930 qpair failed and we were unable to recover it. 00:25:34.930 [2024-07-15 10:38:29.486320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.930 [2024-07-15 10:38:29.486346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.930 qpair failed and we were unable to recover it. 00:25:34.930 [2024-07-15 10:38:29.486488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.930 [2024-07-15 10:38:29.486515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.930 qpair failed and we were unable to recover it. 00:25:34.930 [2024-07-15 10:38:29.486701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.930 [2024-07-15 10:38:29.486729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.930 qpair failed and we were unable to recover it. 00:25:34.930 [2024-07-15 10:38:29.486901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.486927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.487121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.487149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.487304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.487332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 
00:25:34.931 [2024-07-15 10:38:29.487501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.487526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.487692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.487720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.487882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.487926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.488079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.488104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.488305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.488333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.488597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.488649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.488821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.488846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.489019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.489049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.489245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.489273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.489444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.489469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 
00:25:34.931 [2024-07-15 10:38:29.489621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.489646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.489759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.489784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.489931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.489957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.490161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.490188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.490345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.490372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.490541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.490566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.490714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.490756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.490919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.490948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.491060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.491085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.491260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.491285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 
00:25:34.931 [2024-07-15 10:38:29.491421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.491449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.491627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.491653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.491823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.491850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.492046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.492085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.492210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.492237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.492403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.492430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.492705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.492756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.492927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.931 [2024-07-15 10:38:29.492953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.931 qpair failed and we were unable to recover it. 00:25:34.931 [2024-07-15 10:38:29.493115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.493143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.493359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.493410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 
00:25:34.932 [2024-07-15 10:38:29.493568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.493592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.493712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.493752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.493909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.493953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.494136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.494162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.494353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.494387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.494530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.494559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.494705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.494730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.494901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.494934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.495112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.495140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.495304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.495330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 
00:25:34.932 [2024-07-15 10:38:29.495457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.495482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.495631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.495655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.495809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.495834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.495993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.496019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.496187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.496215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.496383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.496407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.496526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.496568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.496732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.496759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.496949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.496977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.497190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.497218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 
00:25:34.932 [2024-07-15 10:38:29.497385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.497413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.497586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.497611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.497738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.497762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.497899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.497939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.498073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.498099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.498290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.498317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.498506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.498554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.498752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.498778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.498912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.498941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.499086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.499113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 
00:25:34.932 [2024-07-15 10:38:29.499276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.499301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.499467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.499495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.499705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.499759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.499943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.499969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.932 [2024-07-15 10:38:29.500091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.932 [2024-07-15 10:38:29.500116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.932 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.500287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.500314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.500482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.500507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.500670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.500696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.500850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.500888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.501062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.501087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 
00:25:34.933 [2024-07-15 10:38:29.501207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.501232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.501358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.501382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.501529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.501554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.501712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.501736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.501897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.501927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.502081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.502107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.502282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.502309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.502435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.502462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.502596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.502621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.502771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.502796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 
00:25:34.933 [2024-07-15 10:38:29.502949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.502975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.503128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.503152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.503293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.503321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.503451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.503478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.503653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.503678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.503821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.503846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.504001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.504027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.504179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.504203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.504328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.504357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.504483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.504508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 
00:25:34.933 [2024-07-15 10:38:29.504653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.504678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.504790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.504832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.505010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.505036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.505209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.505233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.505360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.505385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.505542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.505566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.505682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.505706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.505901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.505938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.506101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.506128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.506284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.506309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 
00:25:34.933 [2024-07-15 10:38:29.506447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.506488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.506620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.506647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.506795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.506821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:34.933 [2024-07-15 10:38:29.506974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.933 [2024-07-15 10:38:29.507000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:34.933 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.507145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.507188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.507355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.507381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.507582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.507610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.507775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.507802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.507972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.507997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.508129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.508154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 
00:25:35.221 [2024-07-15 10:38:29.508344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.508372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.508504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.508529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.508717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.508744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.508872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.508907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.509052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.509078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.509228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.509253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.509371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.509396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.509523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.509548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.509665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.509689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.509874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.509945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 
00:25:35.221 [2024-07-15 10:38:29.510107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.510134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.510263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.510289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.510461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.510486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.510637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.510662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.510787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.510829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.511013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.511041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.511195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.511220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.511378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.511406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.511570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.511598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.221 qpair failed and we were unable to recover it. 00:25:35.221 [2024-07-15 10:38:29.511769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.221 [2024-07-15 10:38:29.511794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 
00:25:35.222 [2024-07-15 10:38:29.511936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.511962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.512107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.512132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.512286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.512311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.512430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.512455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.512629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.512670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.512810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.512834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.513009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.513035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.513194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.513222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.513390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.513414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.513577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.513605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 
00:25:35.222 [2024-07-15 10:38:29.513763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.513790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.513957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.513983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.514124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.514167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.514332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.514360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.514505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.514531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.514646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.514671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.514883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.514911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.515107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.515132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.515273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.515301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 00:25:35.222 [2024-07-15 10:38:29.515440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.222 [2024-07-15 10:38:29.515468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.222 qpair failed and we were unable to recover it. 
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock / qpair failed triplet repeats for roughly 200 further attempts, from 10:38:29.515665 up to the final attempt below; every connect() is refused with errno = 111 on addr=10.0.0.2, port=4420, with the failing tqpair alternating between 0xc94200 and 0x7fe654000b90 ...]
00:25:35.227 [2024-07-15 10:38:29.553021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.227 [2024-07-15 10:38:29.553046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.227 qpair failed and we were unable to recover it.
00:25:35.227 [2024-07-15 10:38:29.553248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.227 [2024-07-15 10:38:29.553274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.227 qpair failed and we were unable to recover it. 00:25:35.227 [2024-07-15 10:38:29.553404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.227 [2024-07-15 10:38:29.553428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.227 qpair failed and we were unable to recover it. 00:25:35.227 [2024-07-15 10:38:29.553550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.227 [2024-07-15 10:38:29.553576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.227 qpair failed and we were unable to recover it. 00:25:35.227 [2024-07-15 10:38:29.553752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.227 [2024-07-15 10:38:29.553779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.227 qpair failed and we were unable to recover it. 00:25:35.227 [2024-07-15 10:38:29.553919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.227 [2024-07-15 10:38:29.553944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.227 qpair failed and we were unable to recover it. 00:25:35.227 [2024-07-15 10:38:29.554098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.227 [2024-07-15 10:38:29.554123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.227 qpair failed and we were unable to recover it. 00:25:35.227 [2024-07-15 10:38:29.554266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.227 [2024-07-15 10:38:29.554294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.227 qpair failed and we were unable to recover it. 00:25:35.227 [2024-07-15 10:38:29.554460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.227 [2024-07-15 10:38:29.554489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.227 qpair failed and we were unable to recover it. 00:25:35.227 [2024-07-15 10:38:29.554655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.227 [2024-07-15 10:38:29.554682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.227 qpair failed and we were unable to recover it. 00:25:35.227 [2024-07-15 10:38:29.554873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.227 [2024-07-15 10:38:29.554907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.227 qpair failed and we were unable to recover it. 
00:25:35.227 [2024-07-15 10:38:29.555080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.227 [2024-07-15 10:38:29.555105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.227 qpair failed and we were unable to recover it. 00:25:35.227 [2024-07-15 10:38:29.555225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.227 [2024-07-15 10:38:29.555266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.227 qpair failed and we were unable to recover it. 00:25:35.227 [2024-07-15 10:38:29.555455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.227 [2024-07-15 10:38:29.555482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.227 qpair failed and we were unable to recover it. 00:25:35.227 [2024-07-15 10:38:29.555649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.227 [2024-07-15 10:38:29.555673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.227 qpair failed and we were unable to recover it. 00:25:35.227 [2024-07-15 10:38:29.555804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.227 [2024-07-15 10:38:29.555848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.227 qpair failed and we were unable to recover it. 00:25:35.227 [2024-07-15 10:38:29.555989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.227 [2024-07-15 10:38:29.556014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.227 qpair failed and we were unable to recover it. 00:25:35.227 [2024-07-15 10:38:29.556159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.556184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.556336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.556364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.556533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.556558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.556736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.556761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 
00:25:35.228 [2024-07-15 10:38:29.556902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.556930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.557065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.557095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.557265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.557290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.557484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.557511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.557672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.557700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.557870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.557911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.558072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.558100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.558224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.558252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.558424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.558449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.558598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.558623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 
00:25:35.228 [2024-07-15 10:38:29.558778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.558805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.558957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.558983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.559133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.559159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.559305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.559335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.559511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.559536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.559662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.559704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.559866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.559901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.560037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.560061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.560211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.560236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.560355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.560380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 
00:25:35.228 [2024-07-15 10:38:29.560556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.560581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.560720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.560749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.560929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.560955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.561074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.561099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.561214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.561239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.561384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.561412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.561610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.561634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.561800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.561834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.562045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.562070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.562219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.562244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 
00:25:35.228 [2024-07-15 10:38:29.562386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.562413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.562574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.562602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.562769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.562793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.562987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.563016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.563200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.563228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.563371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.563397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.563544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.228 [2024-07-15 10:38:29.563587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.228 qpair failed and we were unable to recover it. 00:25:35.228 [2024-07-15 10:38:29.563756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.563783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.563956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.563982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.564150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.564179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 
00:25:35.229 [2024-07-15 10:38:29.564353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.564378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.564535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.564561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.564759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.564788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.564953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.564981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.565143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.565168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.565307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.565335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.565489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.565516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.565686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.565712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.565911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.565940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.566107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.566134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 
00:25:35.229 [2024-07-15 10:38:29.566301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.566325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.566493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.566522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.566712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.566739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.566871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.566904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.567092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.567120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.567276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.567304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.567464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.567489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.567651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.567678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.567861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.567897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.568049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.568074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 
00:25:35.229 [2024-07-15 10:38:29.568187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.568212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.568389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.568416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.568578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.568602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.568836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.568864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.569060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.569088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.569263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.569287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.569414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.569439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.569583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.569613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.569765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.569793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.569966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.569991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 
00:25:35.229 [2024-07-15 10:38:29.570113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.570138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.570291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.570316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.229 [2024-07-15 10:38:29.570463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.229 [2024-07-15 10:38:29.570504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.229 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.570693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.570720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.570855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.570886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.571042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.571067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.571219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.571261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.571463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.571487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.571657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.571684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.571850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.571884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 
00:25:35.230 [2024-07-15 10:38:29.572053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.572078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.572276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.572303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.572438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.572466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.572633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.572657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.572801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.572843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.573048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.573074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.573221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.573246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.573418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.573445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.573606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.573633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.573796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.573820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 
00:25:35.230 [2024-07-15 10:38:29.573938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.573979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.574136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.574163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.574334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.574359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.574476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.574501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.574678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.574703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.574821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.574845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.575021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.575050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.575212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.575241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.575434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.575459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.575651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.575678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 
00:25:35.230 [2024-07-15 10:38:29.575802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.575830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.575998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.576024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.576193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.576220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.576375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.576402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.576575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.576600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.576764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.576791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.576958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.576987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.577136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.577165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.577356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.577383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 00:25:35.230 [2024-07-15 10:38:29.577504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.230 [2024-07-15 10:38:29.577531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.230 qpair failed and we were unable to recover it. 
00:25:35.230 [2024-07-15 10:38:29.577697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.230 [2024-07-15 10:38:29.577725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.230 qpair failed and we were unable to recover it.
00:25:35.230 [2024-07-15 10:38:29.577933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.230 [2024-07-15 10:38:29.577959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.230 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for tqpair=0x7fe654000b90 (addr=10.0.0.2, port=4420), errno = 111 on every attempt, from 10:38:29.578106 through 10:38:29.616753 ...]
00:25:35.236 [2024-07-15 10:38:29.616915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.236 [2024-07-15 10:38:29.616944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.236 qpair failed and we were unable to recover it.
00:25:35.236 [2024-07-15 10:38:29.617129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.617154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.617352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.617379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.617540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.617567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.617721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.617746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.617911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.617940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.618128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.618156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.618294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.618319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.618473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.618515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.618672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.618700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.618861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.618892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 
00:25:35.236 [2024-07-15 10:38:29.619063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.619090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.619224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.619252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.619431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.619455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.619650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.619678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.619821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.619850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.620061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.620086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.620266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.620294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.620462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.620489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.620652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.620677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.620867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.620902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 
00:25:35.236 [2024-07-15 10:38:29.621062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.621087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.621264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.621289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.621479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.621506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.621639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.621667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.621828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.621857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.622012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.622037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.622162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.622188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.622331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.622360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.622478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.622520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.622647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.622675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 
00:25:35.236 [2024-07-15 10:38:29.622874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.622907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.236 [2024-07-15 10:38:29.623081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.236 [2024-07-15 10:38:29.623108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.236 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.623243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.623271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.623440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.623465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.623623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.623651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.623808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.623835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.623976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.624001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.624151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.624193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.624393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.624418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.624563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.624588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 
00:25:35.237 [2024-07-15 10:38:29.624704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.624745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.624930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.624955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.625129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.625154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.625296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.625324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.625471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.625496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.625642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.625668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.625863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.625897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.626032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.626059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.626195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.626222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.626372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.626413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 
00:25:35.237 [2024-07-15 10:38:29.626605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.626630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.626780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.626806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.626922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.626947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.627096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.627121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.627291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.627317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.627493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.627536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.627727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.627755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.627910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.627952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.628105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.628131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.628303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.628332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 
00:25:35.237 [2024-07-15 10:38:29.628528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.628553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.628695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.628723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.628853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.628908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.629111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.629136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.629277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.629305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.629467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.629495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.629661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.629687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.629856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.629897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.630062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.630090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.630224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.630249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 
00:25:35.237 [2024-07-15 10:38:29.630372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.630397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.630587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.237 [2024-07-15 10:38:29.630614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.237 qpair failed and we were unable to recover it. 00:25:35.237 [2024-07-15 10:38:29.630789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.630814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.630964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.630990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.631187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.631215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.631379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.631404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.631527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.631567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.631729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.631757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.631948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.631974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.632124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.632149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 
00:25:35.238 [2024-07-15 10:38:29.632292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.632319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.632517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.632543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.632716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.632743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.632944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.632969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.633117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.633142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.633265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.633308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.633478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.633503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.633654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.633679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.633820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.633849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.634057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.634083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 
00:25:35.238 [2024-07-15 10:38:29.634230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.634255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.634405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.634430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.634583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.634626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.634820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.634845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.635022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.635051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.635213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.635241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.635396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.635421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.635551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.635575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.635696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.635721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.635899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.635925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 
00:25:35.238 [2024-07-15 10:38:29.636039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.636064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.636212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.636237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.636383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.636408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.636523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.636548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.636700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.636725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.636909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.636935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.637063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.238 [2024-07-15 10:38:29.637108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.238 qpair failed and we were unable to recover it. 00:25:35.238 [2024-07-15 10:38:29.637266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.637298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.637463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.637488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.637640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.637665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 
00:25:35.239 [2024-07-15 10:38:29.637843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.637870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.638040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.638065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.638194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.638220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.638335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.638360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.638531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.638556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.638725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.638752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.638887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.638915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.639053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.639077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.639197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.639223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.639386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.639413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 
00:25:35.239 [2024-07-15 10:38:29.639555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.639580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.639729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.639754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.639926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.639952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.640133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.640158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.640327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.640355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.640530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.640556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.640738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.640763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.640914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.640940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.641088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.641130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.641267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.641292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 
00:25:35.239 [2024-07-15 10:38:29.641444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.641486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.641620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.641648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.641816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.641841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.641960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.641985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.642131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.642164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.642330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.642355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.642545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.642573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.642727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.642754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.642909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.642935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.643082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.643121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 
00:25:35.239 [2024-07-15 10:38:29.643289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.643314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.643462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.643488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.643656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.643684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.643889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.643915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.644089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.644113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.644251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.644278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.644440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.644468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.239 [2024-07-15 10:38:29.644602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.239 [2024-07-15 10:38:29.644628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.239 qpair failed and we were unable to recover it. 00:25:35.240 [2024-07-15 10:38:29.644838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.240 [2024-07-15 10:38:29.644866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.240 qpair failed and we were unable to recover it. 00:25:35.240 [2024-07-15 10:38:29.645044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.240 [2024-07-15 10:38:29.645073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.240 qpair failed and we were unable to recover it. 
00:25:35.240 [2024-07-15 10:38:29.645266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.645291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.645484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.645512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.645670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.645698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.645870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.645901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.646050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.646075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.646214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.646241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.646413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.646437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.646582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.646608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.646758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.646786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.646958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.646983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.647132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.647157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.647334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.647361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.647563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.647588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.647754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.647781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.647977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.648006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.648144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.648169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.648285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.648325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.648501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.648527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.648655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.648680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.648848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.648895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.649044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.649069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.649220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.649246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.649415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.649443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.649564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.649591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.649784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.649813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.649959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.649988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.650155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.650184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.650355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.650380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.650570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.650597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.650738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.650765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.650929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.650954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.651118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.651145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.651307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.651335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.651478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.651502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.651647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.651672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.651874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.651910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.652053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.652078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.652266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.652293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.240 qpair failed and we were unable to recover it.
00:25:35.240 [2024-07-15 10:38:29.652491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.240 [2024-07-15 10:38:29.652519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.652713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.652738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.652918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.652946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.653105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.653132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.653286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.653311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.653463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.653487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.653659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.653686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.653841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.653868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.654035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.654060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.654260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.654287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.654432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.654456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.654581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.654606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.654826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.654853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.655070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.655095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.655262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.655289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.655464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.655490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.655608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.655633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.655780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.655821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.655995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.656020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.656163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.656188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.656343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.656369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.656513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.656537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.656693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.656717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.656915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.656944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.657114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.657141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.657286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.657311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.657424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.657453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.657643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.657671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.657832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.657857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.658036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.658065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.658204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.658232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.658429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.658454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.658625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.658652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.658838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.658865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.659047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.659072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.659201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.659225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.659350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.659375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.659520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.659545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.659664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.659688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.659891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.659919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.660111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.660136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.241 qpair failed and we were unable to recover it.
00:25:35.241 [2024-07-15 10:38:29.660332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.241 [2024-07-15 10:38:29.660360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.660523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.660550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.660707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.660734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.660901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.660943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.661097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.661122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.661335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.661360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.661560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.661587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.661745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.661772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.661948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.661974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.662121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.662162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.662307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.662335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.662476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.662502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.662661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.662686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.662883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.662911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.663045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.663069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.663194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.663219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.663386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.663414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.663579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.663604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.663718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.663743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.663893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.663920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.664039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.664064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.664231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.664259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.664417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.664447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.664617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.664642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.664841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.664868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.665022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.665054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.665225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.665250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.665399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.665442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.665602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.665629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.665776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.665801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.665953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.665979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.666127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.666153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.666299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.666325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.666499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.666526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.666690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.242 [2024-07-15 10:38:29.666717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.242 qpair failed and we were unable to recover it.
00:25:35.242 [2024-07-15 10:38:29.666922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.666948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.667118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.667146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.667318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.667343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.667528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.667552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.667688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.667715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.667852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.667886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.668078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.668103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.668270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.668299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.668458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.668486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.668649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.668673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.668824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.668866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.669047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.669072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.669224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.669248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.669364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.669405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.669569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.669597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.669789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.669813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.669992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.670020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.670187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.670215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.670351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.670376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.670502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.670526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.670699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.670727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.670922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.670947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.671085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.671113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.671268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.671296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.671488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.671513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.671648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.671677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.671864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.671898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.672106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.672131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.672296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.672325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.672451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.672479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.672673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.672702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.672895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.672923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.673057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.673084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.673223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.673249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.673405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.673433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.673586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.673613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.673772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.673801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.673967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.673993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.674156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.674183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.674345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.674370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.243 qpair failed and we were unable to recover it.
00:25:35.243 [2024-07-15 10:38:29.674538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.243 [2024-07-15 10:38:29.674565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.244 qpair failed and we were unable to recover it.
00:25:35.244 [2024-07-15 10:38:29.674733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.244 [2024-07-15 10:38:29.674760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.244 qpair failed and we were unable to recover it.
00:25:35.244 [2024-07-15 10:38:29.674929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.244 [2024-07-15 10:38:29.674954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.244 qpair failed and we were unable to recover it.
00:25:35.244 [2024-07-15 10:38:29.675101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.244 [2024-07-15 10:38:29.675144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.244 qpair failed and we were unable to recover it.
00:25:35.244 [2024-07-15 10:38:29.675311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.244 [2024-07-15 10:38:29.675339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.244 qpair failed and we were unable to recover it.
00:25:35.244 [2024-07-15 10:38:29.675508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.244 [2024-07-15 10:38:29.675532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.244 qpair failed and we were unable to recover it.
00:25:35.244 [2024-07-15 10:38:29.675649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.244 [2024-07-15 10:38:29.675674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.244 qpair failed and we were unable to recover it.
00:25:35.244 [2024-07-15 10:38:29.675838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.244 [2024-07-15 10:38:29.675862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.244 qpair failed and we were unable to recover it.
00:25:35.244 [2024-07-15 10:38:29.676016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.244 [2024-07-15 10:38:29.676041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.244 qpair failed and we were unable to recover it.
00:25:35.244 [2024-07-15 10:38:29.676207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.244 [2024-07-15 10:38:29.676236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.244 qpair failed and we were unable to recover it.
00:25:35.244 [2024-07-15 10:38:29.676394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.244 [2024-07-15 10:38:29.676422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.244 qpair failed and we were unable to recover it.
00:25:35.244 [2024-07-15 10:38:29.676563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.244 [2024-07-15 10:38:29.676587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.244 qpair failed and we were unable to recover it.
00:25:35.244 [2024-07-15 10:38:29.676735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.244 [2024-07-15 10:38:29.676780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.244 qpair failed and we were unable to recover it.
00:25:35.244 [2024-07-15 10:38:29.676949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.244 [2024-07-15 10:38:29.676978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.244 qpair failed and we were unable to recover it.
00:25:35.244 [2024-07-15 10:38:29.677142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.244 [2024-07-15 10:38:29.677167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.244 qpair failed and we were unable to recover it.
00:25:35.244 [2024-07-15 10:38:29.677356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.244 [2024-07-15 10:38:29.677383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.244 qpair failed and we were unable to recover it.
00:25:35.244 [2024-07-15 10:38:29.677579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.677607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.677806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.677831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.677972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.678001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.678168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.678196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.678355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.678380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.678508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.678533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.678689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.678717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.678889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.678914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.679079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.679107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.679243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.679271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 
00:25:35.244 [2024-07-15 10:38:29.679440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.679465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.679579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.679619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.679753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.679782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.679956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.679981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.680143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.680175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.680341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.680368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.680540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.680564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.680724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.680751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.680929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.680954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.681137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.681162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 
00:25:35.244 [2024-07-15 10:38:29.681311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.681339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.681495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.681522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.681705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.681730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.681863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.681907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.682061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.682089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.244 qpair failed and we were unable to recover it. 00:25:35.244 [2024-07-15 10:38:29.682277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.244 [2024-07-15 10:38:29.682301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.682461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.682488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.682620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.682647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.682786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.682812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.682962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.683005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 
00:25:35.245 [2024-07-15 10:38:29.683206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.683234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.683368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.683392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.683545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.683586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.683781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.683809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.683976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.684001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.684135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.684162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.684322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.684350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.684518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.684543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.684665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.684690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.684829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.684854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 
00:25:35.245 [2024-07-15 10:38:29.685034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.685059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.685200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.685229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.685389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.685417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.685583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.685609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.685804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.685832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.686012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.686038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.686184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.686210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.686327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.686369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.686523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.686550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.686714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.686739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 
00:25:35.245 [2024-07-15 10:38:29.686903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.686931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.687090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.687118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.687310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.687335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.687528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.687556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.687710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.687742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.687885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.687910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.688055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.688079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.688291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.688319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.688483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.688508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.688700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.688727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 
00:25:35.245 [2024-07-15 10:38:29.688855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.688890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.689081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.689105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.689271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.689298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.689458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.689486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.689672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.689700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.689861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.689904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.245 qpair failed and we were unable to recover it. 00:25:35.245 [2024-07-15 10:38:29.690049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.245 [2024-07-15 10:38:29.690074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.690222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.690248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.690372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.690398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.690543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.690568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 
00:25:35.246 [2024-07-15 10:38:29.690745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.690770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.690969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.690998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.691164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.691194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.691366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.691391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.691510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.691552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.691677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.691705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.691846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.691871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.692029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.692076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.692241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.692270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.692408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.692434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 
00:25:35.246 [2024-07-15 10:38:29.692591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.692631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.692773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.692802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.692969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.692995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.693121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.693147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.693345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.693372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.693518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.693543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.693670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.693694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.693872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.693919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.694083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.694108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.694229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.694270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 
00:25:35.246 [2024-07-15 10:38:29.694462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.694490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.694656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.694681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.694801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.694825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.694999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.695025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.695195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.695224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.695358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.695386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.695546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.695574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.695723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.695748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.695897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.695923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.696091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.696119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 
00:25:35.246 [2024-07-15 10:38:29.696310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.696335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.696513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.696541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.696737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.696764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.696910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.696936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.697061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.697086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.697288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.697315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.697448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.246 [2024-07-15 10:38:29.697473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.246 qpair failed and we were unable to recover it. 00:25:35.246 [2024-07-15 10:38:29.697663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.697691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.697825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.697853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.697997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.698024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 
00:25:35.247 [2024-07-15 10:38:29.698199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.698225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.698377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.698401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.698548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.698572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.698695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.698736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.698862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.698898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.699042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.699067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.699214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.699239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.699404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.699433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.699610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.699636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.699788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.699812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 
00:25:35.247 [2024-07-15 10:38:29.700003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.700032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.700208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.700233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.700353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.700378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.700570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.700597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.700768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.700793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.700913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.700938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.701111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.701135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.701347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.701372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.701545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.701573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.701734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.701761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 
00:25:35.247 [2024-07-15 10:38:29.701905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.701931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.702105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.702145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.702311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.702338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.702486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.702512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.702661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.702689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.702845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.702872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.703030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.703056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.703184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.703209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.703366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.703392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.247 [2024-07-15 10:38:29.703565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.703590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 
00:25:35.247 [2024-07-15 10:38:29.703759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.247 [2024-07-15 10:38:29.703786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.247 qpair failed and we were unable to recover it. 00:25:35.248 [2024-07-15 10:38:29.703957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.248 [2024-07-15 10:38:29.703983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.248 qpair failed and we were unable to recover it. 00:25:35.248 [2024-07-15 10:38:29.704158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.248 [2024-07-15 10:38:29.704183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.248 qpair failed and we were unable to recover it. 00:25:35.248 [2024-07-15 10:38:29.704344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.248 [2024-07-15 10:38:29.704372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.248 qpair failed and we were unable to recover it. 00:25:35.248 [2024-07-15 10:38:29.704573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.248 [2024-07-15 10:38:29.704601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.248 qpair failed and we were unable to recover it. 00:25:35.248 [2024-07-15 10:38:29.704752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.248 [2024-07-15 10:38:29.704777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.248 qpair failed and we were unable to recover it. 00:25:35.248 [2024-07-15 10:38:29.704948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.248 [2024-07-15 10:38:29.704973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.248 qpair failed and we were unable to recover it. 00:25:35.248 [2024-07-15 10:38:29.705124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.248 [2024-07-15 10:38:29.705150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.248 qpair failed and we were unable to recover it. 00:25:35.248 [2024-07-15 10:38:29.705349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.248 [2024-07-15 10:38:29.705373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.248 qpair failed and we were unable to recover it. 00:25:35.248 [2024-07-15 10:38:29.705518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.248 [2024-07-15 10:38:29.705545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.248 qpair failed and we were unable to recover it. 
00:25:35.248 [2024-07-15 10:38:29.705731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.248 [2024-07-15 10:38:29.705758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.248 qpair failed and we were unable to recover it.
00:25:35.248 [2024-07-15 10:38:29.705893 through 10:38:29.746721] the identical three-message failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times, with no variation other than the timestamps.
00:25:35.253 [2024-07-15 10:38:29.746869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.746918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 00:25:35.253 [2024-07-15 10:38:29.747081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.747116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 00:25:35.253 [2024-07-15 10:38:29.747307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.747366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 00:25:35.253 [2024-07-15 10:38:29.747545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.747570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 00:25:35.253 [2024-07-15 10:38:29.747693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.747718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 00:25:35.253 [2024-07-15 10:38:29.747841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.747866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 00:25:35.253 [2024-07-15 10:38:29.748050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.748075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 00:25:35.253 [2024-07-15 10:38:29.748249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.748274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 00:25:35.253 [2024-07-15 10:38:29.748389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.748429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 00:25:35.253 [2024-07-15 10:38:29.748606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.748631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 
00:25:35.253 [2024-07-15 10:38:29.748754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.748778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 00:25:35.253 [2024-07-15 10:38:29.748926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.748952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 00:25:35.253 [2024-07-15 10:38:29.749121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.749149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 00:25:35.253 [2024-07-15 10:38:29.749278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.749306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 00:25:35.253 [2024-07-15 10:38:29.749475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.749499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 00:25:35.253 [2024-07-15 10:38:29.749689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.749714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 00:25:35.253 [2024-07-15 10:38:29.749856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.749892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 00:25:35.253 [2024-07-15 10:38:29.750068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.750096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.253 qpair failed and we were unable to recover it. 00:25:35.253 [2024-07-15 10:38:29.750302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.253 [2024-07-15 10:38:29.750352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.750531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.750556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 
00:25:35.254 [2024-07-15 10:38:29.750698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.750726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.750905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.750938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.751202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.751261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.751425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.751450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.751596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.751622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.751781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.751808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.751999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.752027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.752174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.752199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.752328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.752354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.752534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.752561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 
00:25:35.254 [2024-07-15 10:38:29.752749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.752776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.752943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.752969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.753116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.753142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.753315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.753343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.753509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.753537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.753734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.753760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.753962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.753990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.754181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.754208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.754392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.754419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.754561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.754587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 
00:25:35.254 [2024-07-15 10:38:29.754705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.754730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.754901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.754942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.755183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.755236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.755406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.755431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.755627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.755654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.755796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.755825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.755975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.756010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.756209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.756234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.756378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.756406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.756580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.756605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 
00:25:35.254 [2024-07-15 10:38:29.756731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.756757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.756907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.756943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.757089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.757118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.757313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.757341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.757504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.757533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.757703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.757729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.757903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.757932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.758096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.758124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.758309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.254 [2024-07-15 10:38:29.758358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.254 qpair failed and we were unable to recover it. 00:25:35.254 [2024-07-15 10:38:29.758561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.758585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 
00:25:35.255 [2024-07-15 10:38:29.758734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.758762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.758947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.758976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.759277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.759339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.759535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.759560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.759750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.759777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.759923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.759951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.760111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.760139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.760312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.760337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.760534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.760562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.760721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.760749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 
00:25:35.255 [2024-07-15 10:38:29.760950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.760975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.761107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.761133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.761277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.761302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.761479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.761507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.761648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.761676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.761840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.761865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.762005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.762029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.762180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.762205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.762478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.762529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.762694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.762722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 
00:25:35.255 [2024-07-15 10:38:29.762867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.762921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.763088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.763117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.763386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.763438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.763629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.763654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.763825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.763853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.764007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.764033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.764187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.764211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.764334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.764359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.764535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.764576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.764740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.764767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 
00:25:35.255 [2024-07-15 10:38:29.765039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.765089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.765238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.765263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.765408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.765449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.765583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.255 [2024-07-15 10:38:29.765612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.255 qpair failed and we were unable to recover it. 00:25:35.255 [2024-07-15 10:38:29.765775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.765802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.765987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.766013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.766157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.766183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.766350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.766377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.766606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.766654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.766837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.766862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 
00:25:35.256 [2024-07-15 10:38:29.767063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.767091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.767253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.767280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.767441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.767468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.767604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.767628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.767755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.767780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.767951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.767979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.768156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.768180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.768329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.768353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.768517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.768544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.768718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.768744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 
00:25:35.256 [2024-07-15 10:38:29.768924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.768951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.769122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.769149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.769343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.769371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.769525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.769552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.769748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.769776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.769927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.769952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.770098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.770123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.770296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.770323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.770562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.770613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.770804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.770829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 
00:25:35.256 [2024-07-15 10:38:29.771003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.771032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.771189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.771221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.771412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.771460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.771628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.771653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.771825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.771852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.772025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.772050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.772214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.772241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.772376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.772400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.772590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.772618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 00:25:35.256 [2024-07-15 10:38:29.772780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.256 [2024-07-15 10:38:29.772807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.256 qpair failed and we were unable to recover it. 
00:25:35.256 [2024-07-15 10:38:29.772970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.256 [2024-07-15 10:38:29.772996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.256 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create connect() errno = 111, then the nvme_tcp_qpair_connect_sock error for tqpair=0x7fe654000b90 at 10.0.0.2:4420, then "qpair failed and we were unable to recover it.") repeats for roughly 200 further connection attempts, with the bracketed timestamps advancing monotonically from 10:38:29.773128 through 10:38:29.812711; only the first and last occurrences are kept here ...]
00:25:35.262 [2024-07-15 10:38:29.812888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.262 [2024-07-15 10:38:29.812914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.262 qpair failed and we were unable to recover it.
00:25:35.262 [2024-07-15 10:38:29.813064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.262 [2024-07-15 10:38:29.813088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.262 qpair failed and we were unable to recover it. 00:25:35.262 [2024-07-15 10:38:29.813250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.262 [2024-07-15 10:38:29.813277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.262 qpair failed and we were unable to recover it. 00:25:35.262 [2024-07-15 10:38:29.813466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.262 [2024-07-15 10:38:29.813517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.262 qpair failed and we were unable to recover it. 00:25:35.262 [2024-07-15 10:38:29.813719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.262 [2024-07-15 10:38:29.813744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.262 qpair failed and we were unable to recover it. 00:25:35.262 [2024-07-15 10:38:29.813874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.262 [2024-07-15 10:38:29.813910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.262 qpair failed and we were unable to recover it. 00:25:35.262 [2024-07-15 10:38:29.814103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.262 [2024-07-15 10:38:29.814128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.262 qpair failed and we were unable to recover it. 00:25:35.262 [2024-07-15 10:38:29.814331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.262 [2024-07-15 10:38:29.814356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.262 qpair failed and we were unable to recover it. 00:25:35.262 [2024-07-15 10:38:29.814508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.262 [2024-07-15 10:38:29.814534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.262 qpair failed and we were unable to recover it. 00:25:35.262 [2024-07-15 10:38:29.814682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.262 [2024-07-15 10:38:29.814710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.262 qpair failed and we were unable to recover it. 00:25:35.262 [2024-07-15 10:38:29.814897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.262 [2024-07-15 10:38:29.814940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.262 qpair failed and we were unable to recover it. 
00:25:35.262 [2024-07-15 10:38:29.815069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.262 [2024-07-15 10:38:29.815094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.262 qpair failed and we were unable to recover it. 00:25:35.262 [2024-07-15 10:38:29.815222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.262 [2024-07-15 10:38:29.815246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.262 qpair failed and we were unable to recover it. 00:25:35.262 [2024-07-15 10:38:29.815365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.262 [2024-07-15 10:38:29.815390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.262 qpair failed and we were unable to recover it. 00:25:35.262 [2024-07-15 10:38:29.815563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.262 [2024-07-15 10:38:29.815590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.262 qpair failed and we were unable to recover it. 00:25:35.262 [2024-07-15 10:38:29.815756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.815783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.815948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.815974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.816141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.816169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.816302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.816329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.816491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.816519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.816720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.816745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 
00:25:35.263 [2024-07-15 10:38:29.816913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.816942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.817067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.817101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.817228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.817255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.817420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.817445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.817607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.817635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.817787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.817814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.818078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.818129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.818303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.818327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.818494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.818521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.818699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.818723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 
00:25:35.263 [2024-07-15 10:38:29.818870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.818918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.819112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.819138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.819305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.819333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.819460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.819488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.819649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.819677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.819888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.819913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.820055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.820083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.820247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.820274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.820529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.820580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.820753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.820779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 
00:25:35.263 [2024-07-15 10:38:29.820906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.820948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.821139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.821167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.821396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.821452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.821620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.821645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.821809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.821836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.822017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.822043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.822171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.822196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.822347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.822372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.822557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.822582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.822731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.822758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 
00:25:35.263 [2024-07-15 10:38:29.822897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.822928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.263 [2024-07-15 10:38:29.823106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.263 [2024-07-15 10:38:29.823132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.263 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.823330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.823358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.823543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.823571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.823769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.823796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.823972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.823999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.824163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.824190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.824353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.824381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.824577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.824633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.824822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.824847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 
00:25:35.264 [2024-07-15 10:38:29.825045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.825073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.825214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.825248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.825396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.825421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.825594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.825619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.825780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.825807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.825963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.825988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.826142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.826167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.826340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.826365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.826522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.826550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.826717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.826744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 
00:25:35.264 [2024-07-15 10:38:29.826905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.826934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.827089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.827114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.827232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.827256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.827421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.827449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.827608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.827637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.827780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.827805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.827923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.827948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.828101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.828128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.828292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.828319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.828491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.828515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 
00:25:35.264 [2024-07-15 10:38:29.828680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.828707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.828861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.264 [2024-07-15 10:38:29.828897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.264 qpair failed and we were unable to recover it. 00:25:35.264 [2024-07-15 10:38:29.829068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.829096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.829291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.829316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.829435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.829459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.829603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.829628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.829797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.829825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.829997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.830023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.830180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.830223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.830354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.830382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 
00:25:35.265 [2024-07-15 10:38:29.830541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.830568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.830743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.830768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.830917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.830943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.831113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.831141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.831374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.831426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.831601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.831626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.831793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.831821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.832027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.832052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.832222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.832249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.832388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.832413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 
00:25:35.265 [2024-07-15 10:38:29.832539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.832565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.832742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.832775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.832965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.832993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.833132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.833158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.833304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.833345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.265 [2024-07-15 10:38:29.833534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.265 [2024-07-15 10:38:29.833561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.265 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.833721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.833749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.833915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.833941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.834119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.834144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.834351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.834379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 
00:25:35.266 [2024-07-15 10:38:29.834570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.834604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.834795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.834820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.834980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.835008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.835147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.835176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.835339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.835367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.835537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.835563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.835762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.835790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.835990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.836016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.836164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.836204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.836373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.836398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 
00:25:35.266 [2024-07-15 10:38:29.836542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.836583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.836711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.836739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.836869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.836919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.837069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.837094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.837247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.837271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.837419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.837443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.837647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.837672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.837843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.837868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.838005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.838034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 00:25:35.266 [2024-07-15 10:38:29.838200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.266 [2024-07-15 10:38:29.838228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.266 qpair failed and we were unable to recover it. 
00:25:35.266 [2024-07-15 10:38:29.838476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.266 [2024-07-15 10:38:29.838526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.266 qpair failed and we were unable to recover it.
00:25:35.558 [... the three-line error block above repeats ~210 times between source timestamps 2024-07-15 10:38:29.838476 and 10:38:29.878696 (console timestamps 00:25:35.266 through 00:25:35.558); every repetition is identical apart from the timestamps: same tqpair=0x7fe654000b90, same target addr=10.0.0.2, port=4420, same errno = 111, each followed by "qpair failed and we were unable to recover it." ...]
00:25:35.558 [2024-07-15 10:38:29.878900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.558 [2024-07-15 10:38:29.878929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.558 qpair failed and we were unable to recover it. 00:25:35.558 [2024-07-15 10:38:29.879085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.558 [2024-07-15 10:38:29.879112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.558 qpair failed and we were unable to recover it. 00:25:35.558 [2024-07-15 10:38:29.879309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.558 [2024-07-15 10:38:29.879360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.558 qpair failed and we were unable to recover it. 00:25:35.558 [2024-07-15 10:38:29.879532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.558 [2024-07-15 10:38:29.879559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.558 qpair failed and we were unable to recover it. 00:25:35.558 [2024-07-15 10:38:29.879753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.879781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.879944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.879973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.880172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.880196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.880376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.880401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.880549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.880576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.881189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.881220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 
00:25:35.559 [2024-07-15 10:38:29.881543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.881601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.882057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.882086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.882280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.882309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.882452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.882480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.882738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.882790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.882967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.882993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.883192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.883220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.883385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.883413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.883546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.883575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.883725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.883750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 
00:25:35.559 [2024-07-15 10:38:29.883889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.883916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.884094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.884122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.884386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.884446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.884618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.884645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.884819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.884847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.885052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.885078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.885335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.885384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.885558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.885584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.885747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.885780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.885946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.885975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 
00:25:35.559 [2024-07-15 10:38:29.886163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.886198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.886372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.886396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.886545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.886587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.886745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.886784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.886966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.886995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.887160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.887185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.887303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.887329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.887489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.887517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.887678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.887706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.887855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.887886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 
00:25:35.559 [2024-07-15 10:38:29.888064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.888107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.888245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.888274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.888447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.888476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.888612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.888637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.559 [2024-07-15 10:38:29.888825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.559 [2024-07-15 10:38:29.888853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.559 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.889064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.889089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.889222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.889247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.889424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.889449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.889618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.889645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.889803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.889830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 
00:25:35.560 [2024-07-15 10:38:29.890009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.890035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.890183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.890210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.890370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.890398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.890522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.890550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.890703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.890731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.890907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.890941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.891066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.891107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.891293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.891321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.891569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.891618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.891787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.891812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 
00:25:35.560 [2024-07-15 10:38:29.892005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.892034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.892223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.892252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.892489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.892517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.892686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.892711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.892831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.892857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.893045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.893073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.893289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.893341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.893509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.893535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.893685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.893732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.893922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.893950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 
00:25:35.560 [2024-07-15 10:38:29.894084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.894112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.894278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.894303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.894418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.894442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.894614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.894641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.894775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.894804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.895011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.895037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.895206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.895234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.895396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.895424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.895591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.895616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.895766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.895791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 
00:25:35.560 [2024-07-15 10:38:29.895934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.895960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.896158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.896186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.896448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.896498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.896691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.896716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.896847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.560 [2024-07-15 10:38:29.896871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.560 qpair failed and we were unable to recover it. 00:25:35.560 [2024-07-15 10:38:29.897009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.897034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.897203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.897231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.897404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.897429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.897617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.897645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.897786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.897814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 
00:25:35.561 [2024-07-15 10:38:29.897999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.898024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.898172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.898198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.898369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.898398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.898569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.898595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.898783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.898810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.898993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.899019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.899195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.899220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.899364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.899391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.899554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.899584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.899751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.899776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 
00:25:35.561 [2024-07-15 10:38:29.899966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.899995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.900155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.900191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.900388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.900415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.900580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.900605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.900753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.900777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.900908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.900934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.901116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.901144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.901317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.901342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.901465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.901494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.901612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.901637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 
00:25:35.561 [2024-07-15 10:38:29.901757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.901781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.901954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.901979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.902106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.902133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.902323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.902351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.902553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.902580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.902749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.902773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.902902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.902928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.903078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.903103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.903280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.903307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.903479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.903505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 
00:25:35.561 [2024-07-15 10:38:29.903669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.903698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.903857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.903894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.904087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.904115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.904299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.904324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.904478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.904503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.904642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.561 [2024-07-15 10:38:29.904670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.561 qpair failed and we were unable to recover it. 00:25:35.561 [2024-07-15 10:38:29.904859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.562 [2024-07-15 10:38:29.904894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.562 qpair failed and we were unable to recover it. 00:25:35.562 [2024-07-15 10:38:29.905088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.562 [2024-07-15 10:38:29.905113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.562 qpair failed and we were unable to recover it. 00:25:35.562 [2024-07-15 10:38:29.905262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.562 [2024-07-15 10:38:29.905288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.562 qpair failed and we were unable to recover it. 00:25:35.562 [2024-07-15 10:38:29.905460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.562 [2024-07-15 10:38:29.905484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.562 qpair failed and we were unable to recover it. 
00:25:35.562 [2024-07-15 10:38:29.905659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.562 [2024-07-15 10:38:29.905686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.562 qpair failed and we were unable to recover it.
00:25:35.562 [preceding error triplet repeated 51 more times for tqpair=0x7fe654000b90, timestamps 10:38:29.905882 through 10:38:29.916050]
00:25:35.563 [2024-07-15 10:38:29.916095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca20e0 (9): Bad file descriptor
00:25:35.563 [2024-07-15 10:38:29.916338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.563 [2024-07-15 10:38:29.916376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420
00:25:35.563 qpair failed and we were unable to recover it.
00:25:35.563 [preceding error triplet repeated 53 more times for tqpair=0x7fe658000b90, timestamps 10:38:29.916555 through 10:38:29.926848]
00:25:35.564 [2024-07-15 10:38:29.927055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.564 [2024-07-15 10:38:29.927098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.564 qpair failed and we were unable to recover it.
00:25:35.564 [preceding error triplet repeated 102 more times for tqpair=0x7fe654000b90, timestamps 10:38:29.927253 through 10:38:29.946583]
00:25:35.567 [2024-07-15 10:38:29.946698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.946724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.946895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.946924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.947086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.947111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.947285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.947313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.947476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.947504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.947644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.947669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.947793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.947819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.948015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.948040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.948179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.948204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.948381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.948409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 
00:25:35.567 [2024-07-15 10:38:29.948571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.948599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.948768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.948793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.948959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.948988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.949115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.949144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.949285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.949310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.949459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.949483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.949655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.949680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.949853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.949890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.950052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.567 [2024-07-15 10:38:29.950077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.567 qpair failed and we were unable to recover it. 00:25:35.567 [2024-07-15 10:38:29.950287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.950315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 
00:25:35.568 [2024-07-15 10:38:29.950474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.950499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.950645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.950670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.950839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.950870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.951062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.951087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.951247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.951276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.951443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.951471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.951639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.951664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.951815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.951840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.951998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.952024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.952136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.952161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 
00:25:35.568 [2024-07-15 10:38:29.952353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.952381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.952571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.952599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.952738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.952762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.952901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.952942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.953112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.953140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.953312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.953337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.953488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.953529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.953668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.953697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.953863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.953900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.954034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.954059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 
00:25:35.568 [2024-07-15 10:38:29.954250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.954277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.954472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.954497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.954657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.954685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.954859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.954901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.955065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.955091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.955260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.955287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.955448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.955475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.955634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.955659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.955853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.955888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.956056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.956084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 
00:25:35.568 [2024-07-15 10:38:29.956288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.568 [2024-07-15 10:38:29.956312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.568 qpair failed and we were unable to recover it. 00:25:35.568 [2024-07-15 10:38:29.956478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.956506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.956642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.956670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.956839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.956866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.957016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.957041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.957200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.957227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.957364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.957388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.957555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.957596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.957787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.957814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.957996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.958022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 
00:25:35.569 [2024-07-15 10:38:29.958174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.958199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.958361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.958385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.958532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.958563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.958734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.958759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.958958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.958987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.959159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.959185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.959319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.959344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.959492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.959517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.959633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.959658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.959849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.959883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 
00:25:35.569 [2024-07-15 10:38:29.960046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.960074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.960213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.960239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.960389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.960413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.960558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.960583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.960736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.960761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.960938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.960966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.961142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.961170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.961330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.961355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.961492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.961533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.961701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.961728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 
00:25:35.569 [2024-07-15 10:38:29.961907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.961934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.569 qpair failed and we were unable to recover it. 00:25:35.569 [2024-07-15 10:38:29.962053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.569 [2024-07-15 10:38:29.962078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.962261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.962289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.962452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.962477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.962638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.962666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.962830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.962857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.963025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.963052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.963244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.963272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.963429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.963457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.963623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.963648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 
00:25:35.570 [2024-07-15 10:38:29.963769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.963794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.963982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.964008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.964145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.964170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.964308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.964350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.964517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.964544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.964687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.964729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.964882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.964908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.965059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.965083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.965256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.965281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.965447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.965475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 
00:25:35.570 [2024-07-15 10:38:29.965643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.965670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.965799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.965827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.965968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.965998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.966202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.966229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.966377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.966402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.966519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.966543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.966731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.966758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.966898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.966925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.967095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.967120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.967313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.967338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 
00:25:35.570 [2024-07-15 10:38:29.967476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.570 [2024-07-15 10:38:29.967501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.570 qpair failed and we were unable to recover it. 00:25:35.570 [2024-07-15 10:38:29.967694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.967722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.967924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.967950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.968122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.968147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.968279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.968304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.968479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.968504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.968671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.968699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.968847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.968872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.969000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.969025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.969175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.969200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 
00:25:35.571 [2024-07-15 10:38:29.969360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.969388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.969572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.969600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.969771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.969796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.969922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.969947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.970095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.970120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.970269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.970294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.970445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.970470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.970588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.970612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.970758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.970782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.970937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.970964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 
00:25:35.571 [2024-07-15 10:38:29.971125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.971153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.971284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.971309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.971439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.971464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.971642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.971666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.971816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.971840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.972018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.972046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.972202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.972230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.972425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.972449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.972577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.972604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.571 [2024-07-15 10:38:29.972734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.972763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 
00:25:35.571 [2024-07-15 10:38:29.972969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.571 [2024-07-15 10:38:29.972995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.571 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.973137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.973164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.973358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.973387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.973564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.973589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.973721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.973750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.973901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.973930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.974087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.974112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.974318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.974345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.974488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.974517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.974661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.974685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 
00:25:35.572 [2024-07-15 10:38:29.974857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.974890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.975067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.975095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.975291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.975316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.975508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.975535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.975662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.975689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.975847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.975871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.976074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.976103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.976293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.976321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.976511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.976536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.976663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.976705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 
00:25:35.572 [2024-07-15 10:38:29.976858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.976893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.977037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.977061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.572 qpair failed and we were unable to recover it. 00:25:35.572 [2024-07-15 10:38:29.977260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-07-15 10:38:29.977288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.977451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.977478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.977676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.977700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.977865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.977901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.978025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.978053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.978224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.978248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.978419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.978446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.978573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.978601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 
00:25:35.573 [2024-07-15 10:38:29.978798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.978826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.978969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.978995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.979125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.979164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.979307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.979332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.979455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.979479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.979682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.979706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.979885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.979911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.980074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.980101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.980269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.980297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.980463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.980488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 
00:25:35.573 [2024-07-15 10:38:29.980649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.980676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.980843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.980870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.981047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.981072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.981246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.981273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.981416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.981443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.981585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.981609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.981756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.981798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.981987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.982013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.982166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.982191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.982314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.982339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 
00:25:35.573 [2024-07-15 10:38:29.982490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.982515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.982664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.982689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.573 qpair failed and we were unable to recover it. 00:25:35.573 [2024-07-15 10:38:29.982854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-07-15 10:38:29.982890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.983063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.983090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.983291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.983315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.983482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.983510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.983651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.983680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.983819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.983846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.984004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.984047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.984214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.984238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 
00:25:35.574 [2024-07-15 10:38:29.984385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.984411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.984533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.984575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.984706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.984733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.984896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.984922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.985086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.985114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.985254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.985282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.985477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.985502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.985672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.985699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.985862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.985896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.986091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.986120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 
00:25:35.574 [2024-07-15 10:38:29.986321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.986348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.986476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.986504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.986648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.986673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.986817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.986857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.987047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.987073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.987198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.987223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.987342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.987367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.987515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.987539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.987660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.987684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.987852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.987887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 
00:25:35.574 [2024-07-15 10:38:29.988047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.988075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.988205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.988230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.574 qpair failed and we were unable to recover it. 00:25:35.574 [2024-07-15 10:38:29.988363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-07-15 10:38:29.988387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.988599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.988627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.988764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.988789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.988946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.988972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.989114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.989139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.989314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.989339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.989509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.989538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.989730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.989758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 
00:25:35.575 [2024-07-15 10:38:29.989904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.989929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.990044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.990069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.990205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.990232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.990400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.990425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.990593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.990620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.990747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.990774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.990910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.990936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.991079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.991103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.991240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.991268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.991459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.991484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 
00:25:35.575 [2024-07-15 10:38:29.991646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.991673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.991849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.991874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.992024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.992049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.992216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.992245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.992449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.992474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.992614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.992639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.992799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.992826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.992969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.992995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.993178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.993203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.993334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.993366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 
00:25:35.575 [2024-07-15 10:38:29.993531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.993558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.993701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.993728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.993882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.993924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.575 [2024-07-15 10:38:29.994085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.575 [2024-07-15 10:38:29.994113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.575 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.994306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.994330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.994530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.994558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.994690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.994718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.994912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.994938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.995132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.995160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.995315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.995343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 
00:25:35.576 [2024-07-15 10:38:29.995538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.995563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.995729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.995757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.995917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.995946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.996126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.996151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.996314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.996341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.996523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.996551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.996689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.996714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.996864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.996911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.997075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.997103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.997295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.997320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 
00:25:35.576 [2024-07-15 10:38:29.997485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.997512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.997711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.997736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.997886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.997911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.998027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.998052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.998226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.998251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.998366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.998391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.998584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.998612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.998767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.998794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.998960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.998986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.999143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.999171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 
00:25:35.576 [2024-07-15 10:38:29.999336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.999363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.999534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.999558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.999668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.999708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.576 [2024-07-15 10:38:29.999882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.576 [2024-07-15 10:38:29.999911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.576 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.000088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.000112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.000286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.000313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.000486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.000513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.000688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.000712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.000922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.000950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.001108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.001140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 
00:25:35.577 [2024-07-15 10:38:30.001323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.001348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.001491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.001532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.001674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.001704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.001895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.001938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.002097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.002122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.002287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.002316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.002469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.002494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.002674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.002702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.002884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.002909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.003049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.003074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 
00:25:35.577 [2024-07-15 10:38:30.003224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.003253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.003402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.003439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.003613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.003644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.003798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.003833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.004993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.005034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.005201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.005230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.005358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.005384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.005560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.005603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.005778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.005803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 00:25:35.577 [2024-07-15 10:38:30.005985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.577 [2024-07-15 10:38:30.006012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.577 qpair failed and we were unable to recover it. 
[... the same three-message sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 2024-07-15 10:38:30.006141 through 10:38:30.041799, every attempt refused with errno = 111 ...]
00:25:35.584 [2024-07-15 10:38:30.041972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.041997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.042147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.042172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.042346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.042373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.042551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.042577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.042729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.042753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.042908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.042934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.043052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.043076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.043221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.043262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.043413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.043440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.043587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.043612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 
00:25:35.584 [2024-07-15 10:38:30.043804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.043831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.044024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.044049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.044172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.044199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.044371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.044399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.044586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.044614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.044816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.044841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.044999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.045027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.045190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.045217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.045383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.045408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 00:25:35.584 [2024-07-15 10:38:30.045571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.045598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.584 qpair failed and we were unable to recover it. 
00:25:35.584 [2024-07-15 10:38:30.045727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.584 [2024-07-15 10:38:30.045756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.045936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.045962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.046130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.046157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.046289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.046317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.046512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.046536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.046655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.046698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.046863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.046898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.047040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.047066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.047239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.047266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.047441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.047470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 
00:25:35.585 [2024-07-15 10:38:30.047620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.047645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.047813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.047842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.048007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.048033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.048175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.048200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.048328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.048370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.048506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.048533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.048732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.048757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.048887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.048913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.049035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.049060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.049290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.049315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 
00:25:35.585 [2024-07-15 10:38:30.049487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.049515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.049641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.049668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.049863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.049896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.050021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.050046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.050196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.050222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.050397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.050421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.050542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.050568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.050714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.050739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.050890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.050916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.051069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.051094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 
00:25:35.585 [2024-07-15 10:38:30.051232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.051259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.585 [2024-07-15 10:38:30.051426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.585 [2024-07-15 10:38:30.051451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.585 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.051630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.051656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.051805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.051830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.051979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.052005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.052165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.052192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.052376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.052403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.052578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.052602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.052778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.052803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.053005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.053031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 
00:25:35.586 [2024-07-15 10:38:30.053173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.053198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.053318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.053359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.053510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.053537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.053707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.053732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.053895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.053923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.054093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.054119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.054295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.054320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.054434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.054475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.054628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.054654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.054819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.054849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 
00:25:35.586 [2024-07-15 10:38:30.054979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.055005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.055148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.055188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.055347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.055372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.055562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.055588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.055710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.055737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.055898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.055924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.056089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.056116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.056266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.056292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.056435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.056460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.586 [2024-07-15 10:38:30.056614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.056639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 
00:25:35.586 [2024-07-15 10:38:30.056774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.586 [2024-07-15 10:38:30.056800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.586 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.056946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.056973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.057122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.057147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.057320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.057346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.057501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.057526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.057696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.057721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.057869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.057900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.058062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.058087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.058202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.058227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.058372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.058397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 
00:25:35.587 [2024-07-15 10:38:30.058542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.058567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.058711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.058738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.058887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.058913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.059065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.059091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.059253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.059279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.059440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.059465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.059643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.059669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.059844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.059869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.060037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.060063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.060236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.060261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 
00:25:35.587 [2024-07-15 10:38:30.060411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.060436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.060562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.060588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.060735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.060760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.060921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.060947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.061122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.061147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.061293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.061318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.061465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.061490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.061611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.061636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.061755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.061781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 00:25:35.587 [2024-07-15 10:38:30.061918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.061948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.587 qpair failed and we were unable to recover it. 
00:25:35.587 [2024-07-15 10:38:30.062075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.587 [2024-07-15 10:38:30.062100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.062258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.062284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.062405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.062430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.062573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.062598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.062748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.062772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.062920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.062945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.063121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.063146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.063265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.063290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.063411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.063437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.063554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.063579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 
00:25:35.588 [2024-07-15 10:38:30.063750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.063775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.063903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.063929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.064077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.064102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.064260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.064285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.064434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.064459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.064600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.064624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.064770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.064795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.064948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.064974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.065125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.065150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.065275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.065300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 
00:25:35.588 [2024-07-15 10:38:30.065441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.065466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.065614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.065639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.065760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.065785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.065942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.065967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.066084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.066109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.066241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.066266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.066437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.066478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.066640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.066667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.066795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.066821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 00:25:35.588 [2024-07-15 10:38:30.066945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.588 [2024-07-15 10:38:30.066972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.588 qpair failed and we were unable to recover it. 
00:25:35.588 [2024-07-15 10:38:30.067103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.067131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.067355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.067381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.067546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.067575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.067744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.067772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.067968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.067994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.068150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.068176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.068349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.068375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.068497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.068522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.068641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.068667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.068813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.068849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 
00:25:35.589 [2024-07-15 10:38:30.069055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.069081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.069260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.069313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.069512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.069540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.069698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.069732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.069908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.069952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.070128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.070170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.070337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.070363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.070555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.070583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.070740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.070768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.070933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.070960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 
00:25:35.589 [2024-07-15 10:38:30.071115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.071141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.071287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.071314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.071463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.071488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.071647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.071675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.589 [2024-07-15 10:38:30.071823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.589 [2024-07-15 10:38:30.071849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.589 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.072002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.072027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.072150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.072177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.072330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.072356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.072508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.072533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.072684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.072710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 
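Note that the failures alternate between two tqpair socket contexts, 0x7fe64c000b90 and 0x7fe654000b90, which suggests at least two separate queue pairs cycling through connect attempts against the same refused address in parallel.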
00:25:35.590 [2024-07-15 10:38:30.072859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.072891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.073077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.073102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.073219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.073244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.073413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.073441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.073609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.073634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.073788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.073815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.073945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.073971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.074097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.074122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.074270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.074294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.074441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.074467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 
00:25:35.590 [2024-07-15 10:38:30.074592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.074617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.074770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.074814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.074988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.075016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.075169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.075195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.075344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.075407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.075593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.075621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.075788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.075813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.075964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.075990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.076141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.076166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.076292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.076317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 
00:25:35.590 [2024-07-15 10:38:30.076507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.076533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.076679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.076704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.076872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.076917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.077077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.077102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.077244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.077269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.077390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.077414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.077557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.077582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.077733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.077758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.077890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.077916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 00:25:35.590 [2024-07-15 10:38:30.078090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.590 [2024-07-15 10:38:30.078115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.590 qpair failed and we were unable to recover it. 
00:25:35.590 [2024-07-15 10:38:30.078258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.078283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.078398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.078423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.078569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.078611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.078781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.078809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.078981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.079007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.079134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.079161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.079337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.079366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.079568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.079594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.079728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.079757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.079909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.079939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 
00:25:35.591 [2024-07-15 10:38:30.080138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.080164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.080405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.080459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.080650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.080678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.080872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.080903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.081073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.081101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.081266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.081294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.081492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.081526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.081756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.081802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.081971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.082002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.082166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.082192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 
00:25:35.591 [2024-07-15 10:38:30.082317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.082361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.082508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.082536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.082683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.082709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.082856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.082886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.083034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.083059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.083209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.083233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.083363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.083398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.083557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.083586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.083767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.083794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.083968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.083994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 
00:25:35.591 [2024-07-15 10:38:30.084111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.084137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.084309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.084335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.084499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.084527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.084685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.084712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.084887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.084912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.085110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.085137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.085328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.085356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.085528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.085553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.085719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.085748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.591 [2024-07-15 10:38:30.085953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.085979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 
00:25:35.591 [2024-07-15 10:38:30.086131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.591 [2024-07-15 10:38:30.086156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.591 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.086373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.086422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.086565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.086593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.086768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.086794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.086970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.086996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.087144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.087185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.087354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.087379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.087505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.087546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.087714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.087742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.087939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.087965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 
00:25:35.592 [2024-07-15 10:38:30.088135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.088162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.088290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.088317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.088514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.088539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.088662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.088686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.088799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.088824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.088973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.088999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.089198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.089230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.089392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.089421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.089583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.089608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.089760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.089803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 
00:25:35.592 [2024-07-15 10:38:30.089930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.089959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.090101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.090126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.090254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.090279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.090403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.090428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.090575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.090600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.090728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.090772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.090940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.090969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.091102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.091126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.091241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.091266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.091394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.091419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 
00:25:35.592 [2024-07-15 10:38:30.091607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.091633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.091822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.091850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.091990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.092015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.092133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.092158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.092278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.092302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.092503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.092531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.592 [2024-07-15 10:38:30.092676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.592 [2024-07-15 10:38:30.092701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.592 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.092815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.092840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.093026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.093052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.093204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.093229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 
00:25:35.593 [2024-07-15 10:38:30.093425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.093452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.093581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.093609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.093758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.093783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.093940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.093965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.094131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.094161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.094324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.094348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.094522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.094550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.094718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.094743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.094895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.094921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.095070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.095095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 
00:25:35.593 [2024-07-15 10:38:30.095231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.095259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.095430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.095455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.095623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.095651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.095837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.095864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.096025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.096050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.096226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.096251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.096400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.096459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.096601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.096626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.096757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.096782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.096956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.096984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 
00:25:35.593 [2024-07-15 10:38:30.097165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.097190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.097318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.097345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.097513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.097540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.097732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.097760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.097897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.097940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.098089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.098114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.098258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.098283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.098453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.098481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.098608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.098636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 00:25:35.593 [2024-07-15 10:38:30.098796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.593 [2024-07-15 10:38:30.098821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.593 qpair failed and we were unable to recover it. 
00:25:35.593 [2024-07-15 10:38:30.098976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.593 [2024-07-15 10:38:30.099001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.593 qpair failed and we were unable to recover it.
00:25:35.593 [... the same three-line failure repeats continuously for every retry from 10:38:30.099156 through 10:38:30.138409: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420, and the qpair cannot be recovered ...]
00:25:35.599 [2024-07-15 10:38:30.138553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.599 [2024-07-15 10:38:30.138579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.599 qpair failed and we were unable to recover it.
00:25:35.599 [2024-07-15 10:38:30.138752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.138777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.138923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.138952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.139110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.139136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.139250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.139290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.139451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.139480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.139616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.139641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.139784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.139809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.139966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.139992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.140143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.140168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.140290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.140331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 
00:25:35.599 [2024-07-15 10:38:30.140518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.140546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.140687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.140712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.140860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.140908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.141054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.141082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.141252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.141277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.141470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.141498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.141685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.141717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.141905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.141947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.142103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.142127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.142298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.142326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 
00:25:35.599 [2024-07-15 10:38:30.142502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.142527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.142688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.142715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.142849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.142886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.143064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.143088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.143207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.143232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.143369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.143394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.143543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.143567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.143682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.143707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.143855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.143888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.144065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.144090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 
00:25:35.599 [2024-07-15 10:38:30.144264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.599 [2024-07-15 10:38:30.144292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.599 qpair failed and we were unable to recover it. 00:25:35.599 [2024-07-15 10:38:30.144456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.144483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.144649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.144674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.144840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.144867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.145065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.145093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.145258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.145284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.145460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.145488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.145625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.145652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.145818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.145843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.146016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.146045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 
00:25:35.600 [2024-07-15 10:38:30.146205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.146235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.146427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.146452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.146583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.146608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.146762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.146787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.146910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.146937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.147105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.147133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.147270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.147300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.147465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.147490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.147653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.147681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.147886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.147912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 
00:25:35.600 [2024-07-15 10:38:30.148089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.148114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.148269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.148296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.148421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.148448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.148584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.148609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.148757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.148797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.148957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.148985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.149118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.149146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.149326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.149354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.149516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.149544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.149741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.149769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 
00:25:35.600 [2024-07-15 10:38:30.149934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.149960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.150107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.150131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.150279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.150304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.150433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.150457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.150629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.150657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.150821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.150846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.151051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.151079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.151206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.151234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.151413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.151438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.151558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.151583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 
00:25:35.600 [2024-07-15 10:38:30.151704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.151731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.151885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.151910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.152051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.152078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.152211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.152238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.152409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.152435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.152566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.152590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.152772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.152797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.152992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.153017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.600 [2024-07-15 10:38:30.153184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.600 [2024-07-15 10:38:30.153213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.600 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.153373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.153401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 
00:25:35.601 [2024-07-15 10:38:30.153608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.153633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.153799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.153827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.153983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.154008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.154185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.154225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.154383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.154410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.154548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.154590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.154767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.154809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.154988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.155014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.155158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.155183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.155416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.155442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 
00:25:35.601 [2024-07-15 10:38:30.155589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.155633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.155759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.155785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.155939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.155965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.156128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.156171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.156406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.156449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.156616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.156659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.156775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.156805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.156978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.157023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.157199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.157241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.157440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.157484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 
00:25:35.601 [2024-07-15 10:38:30.157684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.157727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.157902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.157947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.158122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.158164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.158328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.158371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.158543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.158585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.158758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.158783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.158952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.158996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.159140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.159183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.159350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.159378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.159537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.159565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 
00:25:35.601 [2024-07-15 10:38:30.159737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.159763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.159918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.159945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.160096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.160122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.160301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.160326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.160478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.160503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.160649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.160690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.160851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.160893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.161106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.161134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.161284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.161312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.161496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.161552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 
00:25:35.601 [2024-07-15 10:38:30.161819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.161870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.162046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.162074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.162239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.162266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.162404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.162437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.162614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.162638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.162764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.162789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.601 qpair failed and we were unable to recover it. 00:25:35.601 [2024-07-15 10:38:30.162934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.601 [2024-07-15 10:38:30.162960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.602 qpair failed and we were unable to recover it. 00:25:35.602 [2024-07-15 10:38:30.163101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.602 [2024-07-15 10:38:30.163129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.602 qpair failed and we were unable to recover it. 00:25:35.602 [2024-07-15 10:38:30.163279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.602 [2024-07-15 10:38:30.163307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.602 qpair failed and we were unable to recover it. 00:25:35.602 [2024-07-15 10:38:30.163460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.602 [2024-07-15 10:38:30.163487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.602 qpair failed and we were unable to recover it. 
00:25:35.602 [2024-07-15 10:38:30.163618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.602 [2024-07-15 10:38:30.163646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.602 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair failure group repeats back-to-back, one record per line in the raw log: tqpair=0xc94200 from 10:38:30.163828 through 10:38:30.166771, tqpair=0x7fe654000b90 from 10:38:30.166940 through 10:38:30.167952, then tqpair=0xc94200 again from 10:38:30.168099 through 10:38:30.174966; every record reports addr=10.0.0.2, port=4420, errno = 111 ...]
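errno = 111 on Linux is ECONNREFUSED: the TCP dial to 10.0.0.2 is rejected, typically because nothing is listening on port 4420 (the well-known NVMe/TCP port), so every qpair connection the host attempts is refused immediately. The following standalone C sketch is an illustration rather than SPDK's actual posix.c code; it reproduces the same errno against a host with no listener on that port:

/* Minimal sketch (not SPDK's posix_sock_create()) of the failing call:
 * dial 10.0.0.2:4420 once and report errno, which is 111 (ECONNREFUSED)
 * when no NVMe/TCP target is accepting on that port. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the target side down this prints
         * "connect() failed, errno = 111 (Connection refused)",
         * matching the records above. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }

    close(fd);
    return 0;
}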
[... the failure group continues uninterrupted while the Jenkins wall-clock prefix advances from 00:25:35.603 to 00:25:35.890: tqpair=0xc94200 from 10:38:30.175117 through 10:38:30.181787, tqpair=0x7fe654000b90 from 10:38:30.181954 through 10:38:30.183039, tqpair=0xc94200 from 10:38:30.183242 through 10:38:30.184864, then tqpair=0x7fe654000b90 from 10:38:30.185000 through 10:38:30.186262; every record reports addr=10.0.0.2, port=4420, errno = 111 ...]
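The two alternating tqpair values (0xc94200 and 0x7fe654000b90) appear to be two distinct qpair objects the host keeps re-dialing; each attempt is refused, and the driver finally logs "qpair failed and we were unable to recover it." A hedged sketch of such a give-up-after-N-attempts reconnect loop follows; retry_max and the linear backoff are illustrative assumptions, not SPDK's actual recovery policy:

/* Hedged sketch of a bounded reconnect loop. This is NOT SPDK's recovery
 * path, only an illustration of why the log ends in "unable to recover it":
 * every re-dial of 10.0.0.2:4420 gets ECONNREFUSED, so the caller gives up. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static bool try_connect(const char *ip, int port)
{
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return false;
    }
    bool ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    if (!ok) {
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }
    close(fd);
    return ok;
}

int main(void)
{
    const int retry_max = 5;                 /* assumed budget, not SPDK's */
    for (int attempt = 1; attempt <= retry_max; attempt++) {
        if (try_connect("10.0.0.2", 4420)) {
            puts("qpair connected");
            return 0;
        }
        usleep(100000 * attempt);            /* simple linear backoff */
    }
    puts("qpair failed and we were unable to recover it.");
    return 1;
}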
[... repeated connect() refusals continue for tqpair=0x7fe654000b90 from 10:38:30.186386 through 10:38:30.202862, addr=10.0.0.2, port=4420, errno = 111, down to the last recorded attempt: ...]
00:25:35.892 [2024-07-15 10:38:30.203007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.892 [2024-07-15 10:38:30.203035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.892 qpair failed and we were unable to recover it.
00:25:35.892 [2024-07-15 10:38:30.203188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.892 [2024-07-15 10:38:30.203212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.892 qpair failed and we were unable to recover it. 00:25:35.892 [2024-07-15 10:38:30.203356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.892 [2024-07-15 10:38:30.203381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.892 qpair failed and we were unable to recover it. 00:25:35.892 [2024-07-15 10:38:30.203504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.892 [2024-07-15 10:38:30.203528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.892 qpair failed and we were unable to recover it. 00:25:35.892 [2024-07-15 10:38:30.203704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.892 [2024-07-15 10:38:30.203728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.892 qpair failed and we were unable to recover it. 00:25:35.892 [2024-07-15 10:38:30.203900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.892 [2024-07-15 10:38:30.203929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.892 qpair failed and we were unable to recover it. 00:25:35.892 [2024-07-15 10:38:30.204083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.892 [2024-07-15 10:38:30.204110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.892 qpair failed and we were unable to recover it. 00:25:35.892 [2024-07-15 10:38:30.204282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.892 [2024-07-15 10:38:30.204307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.892 qpair failed and we were unable to recover it. 00:25:35.892 [2024-07-15 10:38:30.204430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.892 [2024-07-15 10:38:30.204473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.892 qpair failed and we were unable to recover it. 00:25:35.892 [2024-07-15 10:38:30.204664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.892 [2024-07-15 10:38:30.204693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.892 qpair failed and we were unable to recover it. 00:25:35.892 [2024-07-15 10:38:30.204860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.892 [2024-07-15 10:38:30.204893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.892 qpair failed and we were unable to recover it. 
00:25:35.892 [2024-07-15 10:38:30.205066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.892 [2024-07-15 10:38:30.205091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.892 qpair failed and we were unable to recover it. 00:25:35.892 [2024-07-15 10:38:30.205226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.892 [2024-07-15 10:38:30.205253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.892 qpair failed and we were unable to recover it. 00:25:35.892 [2024-07-15 10:38:30.205427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.892 [2024-07-15 10:38:30.205456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.892 qpair failed and we were unable to recover it. 00:25:35.892 [2024-07-15 10:38:30.205622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.892 [2024-07-15 10:38:30.205649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.892 qpair failed and we were unable to recover it. 00:25:35.892 [2024-07-15 10:38:30.205813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.892 [2024-07-15 10:38:30.205842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.206033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.206058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.206211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.206236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.206361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.206386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.206569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.206595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.206759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.206786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 
00:25:35.893 [2024-07-15 10:38:30.206930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.206960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.207168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.207193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.207331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.207358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.207488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.207515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.207681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.207706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.207863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.207898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.208066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.208094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.208233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.208257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.208407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.208432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.208583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.208610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 
00:25:35.893 [2024-07-15 10:38:30.208746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.208771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.208887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.208912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.209021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.209046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.209194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.209219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.209373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.209400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.209550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.209577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.209795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.209823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.210019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.210045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.210169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.210194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.210388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.210413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 
00:25:35.893 [2024-07-15 10:38:30.210579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.210606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.210762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.210789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.210993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.211019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.211187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.893 [2024-07-15 10:38:30.211214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.893 qpair failed and we were unable to recover it. 00:25:35.893 [2024-07-15 10:38:30.211355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.211383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.211547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.211572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.211737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.211764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.211932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.211961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.212127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.212151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.212299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.212340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 
00:25:35.894 [2024-07-15 10:38:30.212466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.212494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.212670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.212696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.212898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.212931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.213059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.213086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.213217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.213242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.213366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.213407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.213543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.213570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.213715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.213740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.213889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.213932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.214120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.214148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 
00:25:35.894 [2024-07-15 10:38:30.214343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.214368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.214532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.214559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.214750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.214777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.214927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.214954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.215074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.215099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.215245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.215274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.215459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.215484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.215678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.215706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.215868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.215903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.216066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.216091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 
00:25:35.894 [2024-07-15 10:38:30.216215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.216259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.216426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.216454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.216656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.216681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.216807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.216833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.216987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.217014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.217164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.217189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.217327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.217356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.217543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.217571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.217760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.217785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.217959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.217988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 
00:25:35.894 [2024-07-15 10:38:30.218183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.218208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.218359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.218385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.218506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.218530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.218683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.218708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.218831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.894 [2024-07-15 10:38:30.218856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.894 qpair failed and we were unable to recover it. 00:25:35.894 [2024-07-15 10:38:30.219023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.219061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.219203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.219231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.219403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.219428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.219573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.219615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.219742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.219769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 
00:25:35.895 [2024-07-15 10:38:30.219906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.219932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.220079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.220120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.220281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.220314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.220463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.220488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.220637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.220662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.220830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.220859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.221008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.221034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.221199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.221227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.221393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.221421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.221581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.221605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 
00:25:35.895 [2024-07-15 10:38:30.221794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.221821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.222004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.222030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.222179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.222204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.222439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.222489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.222648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.222675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.222836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.222863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.223059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.223097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.223305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.223334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.223519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.223544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.223785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.223840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 
00:25:35.895 [2024-07-15 10:38:30.224027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.224053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.224175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.224200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.224345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.224370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.224519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.224544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.224666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.224691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.224849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.224894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.225049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.225075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.225221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.225246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.225406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.225434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.225606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.225639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 
00:25:35.895 [2024-07-15 10:38:30.225789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.225814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.225946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.225973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.226098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.226123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.226271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.226295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.226488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.226545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.226733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.895 [2024-07-15 10:38:30.226760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.895 qpair failed and we were unable to recover it. 00:25:35.895 [2024-07-15 10:38:30.226913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.896 [2024-07-15 10:38:30.226939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.896 qpair failed and we were unable to recover it. 00:25:35.896 [2024-07-15 10:38:30.227066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.896 [2024-07-15 10:38:30.227092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.896 qpair failed and we were unable to recover it. 00:25:35.896 [2024-07-15 10:38:30.227237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.896 [2024-07-15 10:38:30.227262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.896 qpair failed and we were unable to recover it. 00:25:35.896 [2024-07-15 10:38:30.227408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.896 [2024-07-15 10:38:30.227434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.896 qpair failed and we were unable to recover it. 
00:25:35.896 [2024-07-15 10:38:30.227571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.896 [2024-07-15 10:38:30.227599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.896 qpair failed and we were unable to recover it.
00:25:35.896 [... the same three-line failure repeats continuously from 10:38:30.227 through 10:38:30.267: connect() fails with errno = 111 and qpair 0x7fe654000b90 (addr=10.0.0.2, port=4420) cannot be recovered on any attempt ...]
00:25:35.901 [2024-07-15 10:38:30.267073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.901 [2024-07-15 10:38:30.267098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.901 qpair failed and we were unable to recover it.
00:25:35.901 [2024-07-15 10:38:30.267249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.267274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 00:25:35.901 [2024-07-15 10:38:30.267392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.267416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 00:25:35.901 [2024-07-15 10:38:30.267536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.267561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 00:25:35.901 [2024-07-15 10:38:30.267726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.267754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 00:25:35.901 [2024-07-15 10:38:30.267882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.267925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 00:25:35.901 [2024-07-15 10:38:30.268076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.268101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 00:25:35.901 [2024-07-15 10:38:30.268249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.268278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 00:25:35.901 [2024-07-15 10:38:30.268430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.268455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 00:25:35.901 [2024-07-15 10:38:30.268603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.268628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 00:25:35.901 [2024-07-15 10:38:30.268777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.268801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 
00:25:35.901 [2024-07-15 10:38:30.268970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.268999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 00:25:35.901 [2024-07-15 10:38:30.269166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.269192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 00:25:35.901 [2024-07-15 10:38:30.269381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.269408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 00:25:35.901 [2024-07-15 10:38:30.269571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.269599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 00:25:35.901 [2024-07-15 10:38:30.269762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.269787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 00:25:35.901 [2024-07-15 10:38:30.269939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.269965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 00:25:35.901 [2024-07-15 10:38:30.270118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.270143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 00:25:35.901 [2024-07-15 10:38:30.270319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.901 [2024-07-15 10:38:30.270344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.901 qpair failed and we were unable to recover it. 00:25:35.901 [2024-07-15 10:38:30.270512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.270540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.270695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.270723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 
00:25:35.902 [2024-07-15 10:38:30.270865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.270897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.271052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.271092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.271252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.271280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.271442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.271467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.271628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.271655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.271808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.271836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.272010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.272035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.272151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.272195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.272352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.272380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.272513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.272537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 
00:25:35.902 [2024-07-15 10:38:30.272664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.272688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.272884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.272912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.273072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.273097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.273230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.273258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.273419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.273447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.273594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.273620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.273764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.273807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.273981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.274007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.274145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.274170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.274356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.274383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 
00:25:35.902 [2024-07-15 10:38:30.274537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.274564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.274699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.274724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.274850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.274899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.275090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.275118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.275314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.275338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.275500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.275528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.275682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.275713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.275875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.275906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.276033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.276058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.276202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.276227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 
00:25:35.902 [2024-07-15 10:38:30.276345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.276370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.276545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.276570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.276739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.276767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.276930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.276956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.277096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.277120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.277287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.277315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.277454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.277479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.902 qpair failed and we were unable to recover it. 00:25:35.902 [2024-07-15 10:38:30.277604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.902 [2024-07-15 10:38:30.277629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.277824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.277851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.278014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.278039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 
00:25:35.903 [2024-07-15 10:38:30.278174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.278199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.278377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.278419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.278560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.278585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.278781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.278809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.278978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.279004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.279176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.279200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.279390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.279418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.279544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.279572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.279721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.279745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.279901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.279927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 
00:25:35.903 [2024-07-15 10:38:30.280075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.280100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.280224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.280250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.280420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.280448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.280590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.280617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.280768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.280793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.280967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.280993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.281106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.281130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.281253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.281278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.281423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.281448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.281622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.281649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 
00:25:35.903 [2024-07-15 10:38:30.281812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.281837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.282017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.282046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.282214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.282242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.282427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.282452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.282586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.282614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.282777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.282806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.282975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.283006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.283116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.283158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.283322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.283350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.283521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.283545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 
00:25:35.903 [2024-07-15 10:38:30.283696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.283721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.283870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.283906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.284082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.284107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.284236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.284261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.284370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.284395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.284540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.284565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.284690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.284733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.284938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.903 [2024-07-15 10:38:30.284963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.903 qpair failed and we were unable to recover it. 00:25:35.903 [2024-07-15 10:38:30.285111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.285136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.285301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.285329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 
00:25:35.904 [2024-07-15 10:38:30.285463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.285490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.285629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.285653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.285804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.285845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.285984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.286012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.286163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.286188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.286301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.286326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.286524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.286552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.286708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.286733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.286973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.287002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.287169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.287196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 
00:25:35.904 [2024-07-15 10:38:30.287392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.287416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.287583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.287611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.287797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.287824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.288014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.288040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.288166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.288191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.288378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.288403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.288554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.288579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.288703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.288728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.288884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.288910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.289026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.289053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 
00:25:35.904 [2024-07-15 10:38:30.289200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.289243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.289435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.289463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.289611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.289652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.289841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.289868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.290029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.290055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.290201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.290226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.290352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.290381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.290505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.290531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.290680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.290704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 00:25:35.904 [2024-07-15 10:38:30.290821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.904 [2024-07-15 10:38:30.290864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:35.904 qpair failed and we were unable to recover it. 
00:25:35.904 [2024-07-15 10:38:30.291049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.904 [2024-07-15 10:38:30.291074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:35.904 qpair failed and we were unable to recover it.
[log condensed: the same three-line failure sequence — posix.c:1038:posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it." — repeats for roughly 200 further connection attempts between 10:38:30.291 and 10:38:30.333, first against tqpair=0x7fe654000b90, then tqpair=0x7fe658000b90, then tqpair=0xc94200, every attempt targeting addr=10.0.0.2, port=4420]
00:25:35.910 [2024-07-15 10:38:30.332967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.910 [2024-07-15 10:38:30.332993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.910 qpair failed and we were unable to recover it.
00:25:35.910 [2024-07-15 10:38:30.333119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.333144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.333295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.333321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.333527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.333599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.333782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.333809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.333957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.333983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.334130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.334155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.334283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.334308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.334435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.334460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.334609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.334635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.334786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.334826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 
00:25:35.910 [2024-07-15 10:38:30.334979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.335005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.335150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.335175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.335316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.335340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.335502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.335529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.335658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.335683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.335842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.335890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.336047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.336075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.336272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.336296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.336463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.336490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.336626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.336653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 
00:25:35.910 [2024-07-15 10:38:30.336795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.336821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.336943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.336970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.337119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.337144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.337291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.337316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.337466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.337509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.337645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.337672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.337817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.337842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.337994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.338035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.338173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.338205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.338349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.338374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 
00:25:35.910 [2024-07-15 10:38:30.338548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.338591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.910 qpair failed and we were unable to recover it. 00:25:35.910 [2024-07-15 10:38:30.338727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.910 [2024-07-15 10:38:30.338755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.338896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.338922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.339113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.339140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.339302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.339330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.339486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.339511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.339682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.339706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.339843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.339870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.340052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.340079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.340188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.340228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 
00:25:35.911 [2024-07-15 10:38:30.340353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.340380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.340550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.340575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.340756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.340781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.340924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.340953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.341146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.341171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.341346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.341374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.341540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.341567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.341711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.341736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.341925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.341954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.342116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.342144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 
00:25:35.911 [2024-07-15 10:38:30.342317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.342342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.342454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.342496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.342627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.342655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.342842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.342870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.343016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.343041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.343170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.343197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.343363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.343388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.343580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.343607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.343736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.343764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.343936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.343961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 
00:25:35.911 [2024-07-15 10:38:30.344109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.344134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.344279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.344304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.344449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.344474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.344638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.344665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.344798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.344825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.344978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.345003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.345155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.345180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.345326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.345367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.345512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.345537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.345708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.345753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 
00:25:35.911 [2024-07-15 10:38:30.345896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.345925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.346064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.346089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.346208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.911 [2024-07-15 10:38:30.346233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.911 qpair failed and we were unable to recover it. 00:25:35.911 [2024-07-15 10:38:30.346383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.346408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.346526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.346551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.346779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.346807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.346959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.346988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.347138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.347162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.347281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.347306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.347432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.347456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 
00:25:35.912 [2024-07-15 10:38:30.347630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.347655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.347820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.347847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.347996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.348021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.348183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.348208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.348399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.348427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.348586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.348614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.348802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.348827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.348977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.349003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.349200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.349228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.349397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.349422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 
00:25:35.912 [2024-07-15 10:38:30.349541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.349582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.349743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.349771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.349914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.349940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.350086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.350112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.350249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.350277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.350451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.350476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.350592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.350637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.350773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.350801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.350966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.350991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.351142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.351167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 
00:25:35.912 [2024-07-15 10:38:30.351316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.351358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.351498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.351523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.351694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.351735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.351895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.351924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.352091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.352117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.352238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.352278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.352440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.352468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.352635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.352660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.352806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.352848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.353046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.353071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 
00:25:35.912 [2024-07-15 10:38:30.353225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.353250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.353398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.353423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.353564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.353592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.353741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.912 [2024-07-15 10:38:30.353766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.912 qpair failed and we were unable to recover it. 00:25:35.912 [2024-07-15 10:38:30.354009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.354037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.354223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.354251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.354389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.354414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.354540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.354567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.354737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.354765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.354956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.354981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 
00:25:35.913 [2024-07-15 10:38:30.355121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.355149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.355331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.355359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.355500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.355525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.355649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.355674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.355824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.355850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.355996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.356022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.356156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.356199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.356360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.356387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.356550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.356575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.356722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.356747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 
00:25:35.913 [2024-07-15 10:38:30.356904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.356930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.357080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.357106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.357275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.357304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.357492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.357519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.357708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.357736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.357912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.357938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.358087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.358112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.358254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.358284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.358404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.358445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.358620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.358648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 
00:25:35.913 [2024-07-15 10:38:30.358791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.358816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.358934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.358960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.359116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.359143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.359312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.359337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.359485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.359527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.359680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.359708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.359881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.359906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.360029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.360054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.360166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.360191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.360364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.360389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 
00:25:35.913 [2024-07-15 10:38:30.360552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.360580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.360719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.360747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.360917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.360943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.361060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.361086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.361255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.361283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.913 qpair failed and we were unable to recover it. 00:25:35.913 [2024-07-15 10:38:30.361479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.913 [2024-07-15 10:38:30.361505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.361658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.361683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.361830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.361871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.362051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.362077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.362279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.362307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 
00:25:35.914 [2024-07-15 10:38:30.362447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.362476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.362637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.362662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.362782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.362823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.362985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.363011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.363133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.363163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.363328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.363356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.363546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.363574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.363744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.363768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.363914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.363940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.364136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.364164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 
00:25:35.914 [2024-07-15 10:38:30.364307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.364332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.364459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.364485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.364661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.364689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.364831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.364858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.365000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.365025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.365217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.365244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.365404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.365429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.365581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.365606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.365757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.365798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.366007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.366033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 
00:25:35.914 [2024-07-15 10:38:30.366191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.366219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.366386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.366414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.366612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.366637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.366800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.366828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.366986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.367014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.367183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.367208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.367317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.367357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.367521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.367549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.914 [2024-07-15 10:38:30.367679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.914 [2024-07-15 10:38:30.367703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.914 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.367820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.367845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 
00:25:35.915 [2024-07-15 10:38:30.368025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.368050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.368176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.368201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.368403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.368431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.368568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.368596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.368788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.368813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.368969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.368995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.369145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.369170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.369351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.369376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.369520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.369562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.369691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.369719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 
00:25:35.915 [2024-07-15 10:38:30.369909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.369935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.370048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.370090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.370251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.370278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.370478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.370503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.370641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.370669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.370841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.370882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.371052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.371076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.371235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.371263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.371452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.371479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.371612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.371637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 
00:25:35.915 [2024-07-15 10:38:30.371780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.371806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.371983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.372012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.372178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.372203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.372345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.372386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.372551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.372579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.372758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.372785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.372905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.372948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.373078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.373102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.373250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.373275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.373400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.373425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 
00:25:35.915 [2024-07-15 10:38:30.373566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.373594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.373791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.373816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.373955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.373983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.374127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.374155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.915 qpair failed and we were unable to recover it. 00:25:35.915 [2024-07-15 10:38:30.374325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.915 [2024-07-15 10:38:30.374350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.374464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.374506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.374661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.374688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.374854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.374883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.375012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.375038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.375179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.375204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 
00:25:35.916 [2024-07-15 10:38:30.375356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.375381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.375532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.375574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.375757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.375789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.375957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.375983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.376110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.376150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.376338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.376366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.376530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.376555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.376715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.376743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.376932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.376960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.377147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.377172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 
00:25:35.916 [2024-07-15 10:38:30.377350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.377378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.377536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.377563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.377699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.377724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.377867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.377924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.378116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.378144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.378289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.378314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.378465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.378507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.378639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.378666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.378837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.378861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.379086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.379114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 
00:25:35.916 [2024-07-15 10:38:30.379278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.379306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.379477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.379501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.379650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.379676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.379810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.379837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.380012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.380038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.380188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.380213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.380354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.380379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.380527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.380552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.916 [2024-07-15 10:38:30.380743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.916 [2024-07-15 10:38:30.380771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.916 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.380977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.381003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 
00:25:35.917 [2024-07-15 10:38:30.381133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.381160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.381274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.381298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.381466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.381494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.381654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.381679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.381818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.381843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.381966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.381992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.382114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.382140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.382281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.382324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.382498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.382526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.382721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.382746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 
00:25:35.917 [2024-07-15 10:38:30.382914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.382943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.383076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.383104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.383267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.383292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.383417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.383462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.383622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.383650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.383779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.383804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.383912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.383938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.384086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.384114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.384261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.384285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.384407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.384433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 
00:25:35.917 [2024-07-15 10:38:30.384615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.384643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.384779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.384804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.384962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.384988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.385137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.385178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.385384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.385409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.385575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.385603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.385787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.385815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.385987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.386012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.386202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.386230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.386359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.386386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 
00:25:35.917 [2024-07-15 10:38:30.386557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.386581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.386738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.386766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.386917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.386959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.387114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.387139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.387283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.387325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.387480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.917 [2024-07-15 10:38:30.387508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.917 qpair failed and we were unable to recover it. 00:25:35.917 [2024-07-15 10:38:30.387675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.387700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.387864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.387896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.388051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.388078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.388216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.388240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 
00:25:35.918 [2024-07-15 10:38:30.388384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.388423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.388587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.388615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.388789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.388814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.388936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.388961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.389078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.389102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.389276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.389300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.389469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.389496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.389690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.389717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.389863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.389905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.390033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.390075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 
00:25:35.918 [2024-07-15 10:38:30.390273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.390300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.390444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.390469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.390598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.390622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.390819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.390846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.391027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.391056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.391237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.391266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.391403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.391431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.391576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.391601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.391746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.391771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.391934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.391959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 
00:25:35.918 [2024-07-15 10:38:30.392134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.392159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.392326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.392353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.392481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.392508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.392680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.392705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.392875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.392910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.393077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.393105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.393265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.393290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.393455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.393482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.393653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.393680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.393824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.393849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 
00:25:35.918 [2024-07-15 10:38:30.394018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.394060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.394183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.394211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.394407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.394431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.394578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.394605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.394804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.394831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.394989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.395014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.918 qpair failed and we were unable to recover it. 00:25:35.918 [2024-07-15 10:38:30.395166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.918 [2024-07-15 10:38:30.395191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.395378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.395404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.395580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.395605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.395741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.395768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 
00:25:35.919 [2024-07-15 10:38:30.395952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.395981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.396151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.396179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.396351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.396378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.396535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.396562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.396698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.396723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.396840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.396866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.397018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.397043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.397190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.397215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.397337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.397380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.397547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.397574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 
00:25:35.919 [2024-07-15 10:38:30.397738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.397762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.397942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.397971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.398130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.398157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.398298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.398322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.398438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.398462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.398596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.398624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.398791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.398815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.398937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.398979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.399135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.399163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.399329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.399353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 
00:25:35.919 [2024-07-15 10:38:30.399514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.399541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.399675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.399702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.399874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.399904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.400049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.400077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.400245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.400272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.400405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.400430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.400626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.400654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.400839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.400868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.401011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.401036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 00:25:35.919 [2024-07-15 10:38:30.401237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.919 [2024-07-15 10:38:30.401265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.919 qpair failed and we were unable to recover it. 
00:25:35.919 [2024-07-15 10:38:30.401428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.919 [2024-07-15 10:38:30.401456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.920 qpair failed and we were unable to recover it.
00:25:35.920 [2024-07-15 10:38:30.401629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.920 [2024-07-15 10:38:30.401653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.920 qpair failed and we were unable to recover it.
00:25:35.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2417460 Killed "${NVMF_APP[@]}" "$@"
00:25:35.920 [2024-07-15 10:38:30.401820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.920 [2024-07-15 10:38:30.401848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.920 qpair failed and we were unable to recover it.
00:25:35.920 [2024-07-15 10:38:30.402028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.920 [2024-07-15 10:38:30.402053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.920 qpair failed and we were unable to recover it.
00:25:35.920 10:38:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:25:35.920 [2024-07-15 10:38:30.402203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.920 [2024-07-15 10:38:30.402229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.920 qpair failed and we were unable to recover it.
00:25:35.920 10:38:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:25:35.920 [2024-07-15 10:38:30.402352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.920 10:38:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:35.920 [2024-07-15 10:38:30.402395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.920 qpair failed and we were unable to recover it.
00:25:35.920 10:38:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:35.920 [2024-07-15 10:38:30.402564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.920 [2024-07-15 10:38:30.402592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.920 qpair failed and we were unable to recover it.
00:25:35.920 10:38:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:35.920 [2024-07-15 10:38:30.402759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.920 [2024-07-15 10:38:30.402783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.920 qpair failed and we were unable to recover it.
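(The shell trace interleaved above is the recovery step: target_disconnect.sh has SIGKILLed the running target, pid 2417460, and disconnect_init 10.0.0.2 now calls nvmfappstart -m 0xF0 to bring up a fresh one. The -m argument is SPDK's hexadecimal core mask; a small sketch of how to decode it, illustrative only and not from the log:)

    # Decode the hexadecimal core mask passed to the SPDK app via -m 0xF0:
    # bit N set means CPU core N is handed to an SPDK reactor.
    mask=$((0xF0))
    for ((cpu = 0; cpu < 8; cpu++)); do
        if (( (mask >> cpu) & 1 )); then
            echo "core $cpu enabled"   # prints cores 4, 5, 6, 7 for 0xF0
        fi
    done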
00:25:35.920 [2024-07-15 10:38:30.402936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.402962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.403112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.403154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.403327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.403351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.403500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.403525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.403718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.403746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.403953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.403979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.404109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.404134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.404254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.404278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.404447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.404472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.404612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.404641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 
00:25:35.920 [2024-07-15 10:38:30.404780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.404808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.404986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.405012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.405137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.405163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.405286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.405311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.405443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.405467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.405594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.405623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.405747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.405771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.405896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.405921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.406062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.406104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 00:25:35.920 [2024-07-15 10:38:30.406285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.920 [2024-07-15 10:38:30.406313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.920 qpair failed and we were unable to recover it. 
00:25:35.920 10:38:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2418024
00:25:35.920 10:38:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:25:35.920 [2024-07-15 10:38:30.406510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.920 [2024-07-15 10:38:30.406535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.920 qpair failed and we were unable to recover it.
00:25:35.920 10:38:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2418024
00:25:35.920 [2024-07-15 10:38:30.406722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.920 [2024-07-15 10:38:30.406750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.920 qpair failed and we were unable to recover it.
00:25:35.920 10:38:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2418024 ']'
00:25:35.920 [2024-07-15 10:38:30.406903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.920 [2024-07-15 10:38:30.406933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.920 qpair failed and we were unable to recover it.
00:25:35.920 10:38:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:35.920 10:38:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:35.920 [2024-07-15 10:38:30.407084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.920 [2024-07-15 10:38:30.407112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.920 qpair failed and we were unable to recover it.
00:25:35.920 10:38:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:35.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:35.921 [2024-07-15 10:38:30.407266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.921 [2024-07-15 10:38:30.407309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.921 qpair failed and we were unable to recover it.
00:25:35.921 10:38:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:35.921 10:38:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:35.921 [2024-07-15 10:38:30.407448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.921 [2024-07-15 10:38:30.407477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.921 qpair failed and we were unable to recover it.
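(The trace here shows the replacement target being launched, pid 2418024, as nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 inside the cvl_0_0_ns_spdk namespace, while waitforlisten blocks until the new process opens its RPC socket. A simplified sketch of that wait loop, reusing the socket path and retry budget from the trace; the real helper in autotest_common.sh does additional checking:)

    # Simplified waitforlisten-style loop; values taken from the trace above.
    rpc_addr=/var/tmp/spdk.sock   # RPC socket path from the trace
    max_retries=100               # retry budget from the trace
    pid=2418024                   # new nvmf_tgt pid from the trace
    for ((i = 0; i < max_retries; i++)); do
        # stop waiting if the target died instead of starting up
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        # the UNIX-domain RPC socket appears once the app has initialized
        [ -S "$rpc_addr" ] && { echo "nvmf_tgt is listening on $rpc_addr"; exit 0; }
        sleep 0.5
    done
    echo "timed out waiting for $rpc_addr" >&2
    exit 1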
00:25:35.921 [2024-07-15 10:38:30.407632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.407657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.407830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.407859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.408034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.408059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.408204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.408229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.408358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.408384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.408504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.408529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.408702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.408727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.408873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.408908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.409071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.409099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.409242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.409267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 
00:25:35.921 [2024-07-15 10:38:30.409458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.409485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.409671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.409699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.409847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.409873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.410049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.410076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.410216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.410244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.410411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.410435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.410552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.410593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.410755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.410782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.410961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.410988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.411107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.411150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 
00:25:35.921 [2024-07-15 10:38:30.411327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.411352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.411498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.411523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.411689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.411718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.411910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.411938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.412086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.412111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.412234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.412259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.412432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.412460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.412630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.412654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.412816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.412843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.413019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.413044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 
00:25:35.921 [2024-07-15 10:38:30.413198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.413223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.413385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.413413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.413537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.413564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.413755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.413780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.413953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.413982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.414171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.414198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.414329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.414354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.921 [2024-07-15 10:38:30.414499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.921 [2024-07-15 10:38:30.414524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.921 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.414649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.414673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.414794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.414819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 
00:25:35.922 [2024-07-15 10:38:30.414954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.414996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.415151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.415179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.415338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.415363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.415509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.415550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.415735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.415763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.415929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.415955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.416088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.416113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.416280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.416308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.416489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.416514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.416637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.416663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 
00:25:35.922 [2024-07-15 10:38:30.416812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.416837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.416986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.417011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.417179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.417206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.417365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.417393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.417569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.417594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.417756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.417783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.417913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.417955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.418102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.418126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.418293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.418322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 00:25:35.922 [2024-07-15 10:38:30.418490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.922 [2024-07-15 10:38:30.418518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.922 qpair failed and we were unable to recover it. 
00:25:35.922 [2024-07-15 10:38:30.418656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.922 [2024-07-15 10:38:30.418680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.922 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats for every reconnect attempt from 10:38:30.418812 through 10:38:30.454036 ...]
00:25:35.928 [2024-07-15 10:38:30.454174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.928 [2024-07-15 10:38:30.454199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.928 qpair failed and we were unable to recover it.
00:25:35.928 [2024-07-15 10:38:30.454315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.928 [2024-07-15 10:38:30.454344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.928 qpair failed and we were unable to recover it. 00:25:35.928 [2024-07-15 10:38:30.454484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.928 [2024-07-15 10:38:30.454509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.928 qpair failed and we were unable to recover it. 00:25:35.928 [2024-07-15 10:38:30.454631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.928 [2024-07-15 10:38:30.454657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.928 qpair failed and we were unable to recover it. 00:25:35.928 [2024-07-15 10:38:30.454804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.928 [2024-07-15 10:38:30.454828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.928 qpair failed and we were unable to recover it. 00:25:35.928 [2024-07-15 10:38:30.454948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.928 [2024-07-15 10:38:30.454974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.928 qpair failed and we were unable to recover it. 00:25:35.928 [2024-07-15 10:38:30.455105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.928 [2024-07-15 10:38:30.455130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.928 qpair failed and we were unable to recover it. 00:25:35.928 [2024-07-15 10:38:30.455302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.928 [2024-07-15 10:38:30.455327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.928 qpair failed and we were unable to recover it. 00:25:35.928 [2024-07-15 10:38:30.455446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.928 [2024-07-15 10:38:30.455471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.928 qpair failed and we were unable to recover it. 00:25:35.928 [2024-07-15 10:38:30.455593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.928 [2024-07-15 10:38:30.455618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.928 qpair failed and we were unable to recover it. 00:25:35.928 [2024-07-15 10:38:30.455743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.928 [2024-07-15 10:38:30.455768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.928 qpair failed and we were unable to recover it. 00:25:35.928 [2024-07-15 10:38:30.455777] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
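(For reference: errno = 111 on Linux is ECONNREFUSED, meaning the TCP connect() to 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) is being answered with a RST because nothing is listening there yet. The standalone sketch below is not SPDK code; it only reproduces the same failure mode, assuming a reachable host that actively refuses the port rather than silently dropping the SYN.)

/* Minimal sketch: provoke errno 111 (ECONNREFUSED) by connecting to
 * a TCP port with no listener, as the log shows for 10.0.0.2:4420.
 * Address and port are copied from the log and are illustrative. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);               /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* Against a host that refuses the connection, this prints
         * errno = 111 (Connection refused) on Linux. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}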
00:25:35.928 [2024-07-15 10:38:30.455840] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
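(The bracketed EAL parameter list is the environment configuration the nvmf target hands to DPDK: -c 0xF0 restricts it to cores 4-7, --file-prefix=spdk0 namespaces its hugepage files so it can coexist with other DPDK processes on the node, and --base-virtaddr pins the mapping base for a reproducible shared-memory layout. As a hedged illustration of where that list ends up, the sketch below feeds an abridged copy of the same strings to rte_eal_init(), DPDK's entry point for parsing and consuming EAL arguments. This is not SPDK's actual startup path, which builds the argv internally.)

/* Hedged sketch: feeding the logged EAL parameters to DPDK directly.
 * The argument strings are copied (abridged) from the log above and
 * may not suit another machine. */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                           /* program name, as in the log */
        "-c", "0xF0",                     /* core mask: lcores 4-7 */
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--base-virtaddr=0x200000000000", /* fixed VA base */
        "--match-allocations",            /* free hugepages as allocated */
        "--file-prefix=spdk0",            /* namespace the hugepage files */
        "--proc-type=auto",
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "rte_eal_init() failed\n");
        return 1;
    }
    /* ... application work would go here ... */
    return rte_eal_cleanup();
}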
00:25:35.931 [... identical connect() failed (errno = 111) / qpair failed retries continue for tqpair=0xc94200, 10:38:30.455 through 10:38:30.472 ...]
00:25:35.931 [2024-07-15 10:38:30.472415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.931 [2024-07-15 10:38:30.472455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420
00:25:35.931 qpair failed and we were unable to recover it.
00:25:35.931 [... from 10:38:30.472 through 10:38:30.484 the same retry group keeps repeating, now interleaved between tqpair=0x7fe658000b90 and tqpair=0xc94200, both against addr=10.0.0.2, port=4420 ...]
00:25:35.932 [2024-07-15 10:38:30.484017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.932 [2024-07-15 10:38:30.484043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:35.932 qpair failed and we were unable to recover it.
00:25:35.932 [2024-07-15 10:38:30.484197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.932 [2024-07-15 10:38:30.484222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.932 qpair failed and we were unable to recover it. 00:25:35.932 [2024-07-15 10:38:30.484370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.932 [2024-07-15 10:38:30.484396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.932 qpair failed and we were unable to recover it. 00:25:35.932 [2024-07-15 10:38:30.484545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.932 [2024-07-15 10:38:30.484570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.932 qpair failed and we were unable to recover it. 00:25:35.932 [2024-07-15 10:38:30.484727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.932 [2024-07-15 10:38:30.484753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.932 qpair failed and we were unable to recover it. 00:25:35.932 [2024-07-15 10:38:30.484930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.932 [2024-07-15 10:38:30.484957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.932 qpair failed and we were unable to recover it. 00:25:35.932 [2024-07-15 10:38:30.485097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.932 [2024-07-15 10:38:30.485123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.932 qpair failed and we were unable to recover it. 00:25:35.932 [2024-07-15 10:38:30.485269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.932 [2024-07-15 10:38:30.485295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.932 qpair failed and we were unable to recover it. 00:25:35.932 [2024-07-15 10:38:30.485442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.932 [2024-07-15 10:38:30.485468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.932 qpair failed and we were unable to recover it. 00:25:35.932 [2024-07-15 10:38:30.485614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.932 [2024-07-15 10:38:30.485640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.932 qpair failed and we were unable to recover it. 00:25:35.932 [2024-07-15 10:38:30.485788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.932 [2024-07-15 10:38:30.485813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.932 qpair failed and we were unable to recover it. 
00:25:35.932 [2024-07-15 10:38:30.485975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.932 [2024-07-15 10:38:30.486001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.932 qpair failed and we were unable to recover it. 00:25:35.932 [2024-07-15 10:38:30.486152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.932 [2024-07-15 10:38:30.486177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.932 qpair failed and we were unable to recover it. 00:25:35.932 [2024-07-15 10:38:30.486299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.932 [2024-07-15 10:38:30.486325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.932 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.486471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.486497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.486616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.486641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.486819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.486845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.487002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.487028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.487147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.487172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.487295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.487321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.487465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.487490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 
00:25:35.933 [2024-07-15 10:38:30.487623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.487647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.487793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.487818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.487947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.487974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.488124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.488150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.488306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.488331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.488505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.488531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.488685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.488710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.488888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.488914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.489078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.489103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.489284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.489309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 
00:25:35.933 [2024-07-15 10:38:30.489438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.489463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.489582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.489607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.489745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.489771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.489924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.489951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.490111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.490136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.490262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.490288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.490441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.490466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.490626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.490650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.491413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.491442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.491572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.491598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 
00:25:35.933 [2024-07-15 10:38:30.491779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.491806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.491971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.491997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.492116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.492142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.492328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.492362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.492510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.492536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.493512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.493543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.933 [2024-07-15 10:38:30.493682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.933 [2024-07-15 10:38:30.493708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.933 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.493859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.493904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.494057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.494083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.494238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.494263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 
00:25:35.934 [2024-07-15 10:38:30.494443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.494469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.494600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.494625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.494778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.494803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.494976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.495002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.934 [2024-07-15 10:38:30.495165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.495198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.495326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.495351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.495496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.495521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.495675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.495711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.495893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.495919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.496040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.496065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 
00:25:35.934 [2024-07-15 10:38:30.496226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.496251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.496384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.496411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.496587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.496612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.496762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.496790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.496949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.496975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.497126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.497152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.497270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.497295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.497415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.497440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.497617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.497642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.497800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.497825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 
00:25:35.934 [2024-07-15 10:38:30.497957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.497982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.498139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.498164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.498322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.498348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.498457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.498482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.498602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.498632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.498814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.498839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.499013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.499038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.934 qpair failed and we were unable to recover it. 00:25:35.934 [2024-07-15 10:38:30.499188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.934 [2024-07-15 10:38:30.499213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.499339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.499366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.499491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.499516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 
00:25:35.935 [2024-07-15 10:38:30.499687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.499712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.499874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.499904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.500032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.500057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.500204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.500229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.500379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.500403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.500546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.500572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.500730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.500755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.500894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.500920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.501074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.501099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.501254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.501280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 
00:25:35.935 [2024-07-15 10:38:30.501426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.501451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.501600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.501625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.501769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.501793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.501974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.501999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.502124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.502149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.502302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.502329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.502474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.502499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.502670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.502695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.502844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.502869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.503036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.503061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 
00:25:35.935 [2024-07-15 10:38:30.503175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.503201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.503352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.503377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.503501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.503526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.503700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.503724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.503842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.503884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.504035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.504060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.504210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.504235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.504384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.504409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.504522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.504548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.504699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.504724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 
00:25:35.935 [2024-07-15 10:38:30.504844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.504869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.505006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.505031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.505180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.505205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.505327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.505352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.505478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.505503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.505618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.505647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.505794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.505819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.935 qpair failed and we were unable to recover it. 00:25:35.935 [2024-07-15 10:38:30.505977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.935 [2024-07-15 10:38:30.506004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.506144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.506169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.506316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.506341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 
00:25:35.936 [2024-07-15 10:38:30.506468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.506493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.506642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.506667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.506785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.506810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.506928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.506954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.507107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.507133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.507276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.507301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.507448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.507473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.507609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.507634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.507783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.507808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.507940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.507967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 
00:25:35.936 [2024-07-15 10:38:30.508078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.508103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.508262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.508287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.508420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.508446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.508586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.508611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.508757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.508790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.509687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.509718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.509889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.509916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.510042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.510068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.510239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.510264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.510393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.510419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 
00:25:35.936 [2024-07-15 10:38:30.510566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.510591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.510711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.510737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.510862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.510898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.511049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.511074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.511229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.511254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:35.936 [2024-07-15 10:38:30.511376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.936 [2024-07-15 10:38:30.511400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:35.936 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.511548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.511575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.511726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.511753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.511902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.511928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.512061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.512086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 
00:25:36.227 [2024-07-15 10:38:30.512223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.512248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.512397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.512422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.512539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.512564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.512718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.512743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.512872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.512903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.513042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.513067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.513229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.513270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.513458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.513485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.513637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.513663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.513787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.513814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 
00:25:36.227 [2024-07-15 10:38:30.513945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.513972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.514147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.514173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.514324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.514349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.514468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.514494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.514626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.514652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.514805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.514833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.514964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.514990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.515110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.515134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.515263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.515287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.515440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.515468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 
00:25:36.227 [2024-07-15 10:38:30.515591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.515616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.515758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.515783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.515953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.515979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.516894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.516924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.517060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.517086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.517801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.517830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.517975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.518001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.518674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.518703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.518906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.518932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.519056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.519082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 
00:25:36.227 [2024-07-15 10:38:30.519205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.519231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.519400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.519425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.519543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.519572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.519729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.519775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.227 qpair failed and we were unable to recover it. 00:25:36.227 [2024-07-15 10:38:30.519946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.227 [2024-07-15 10:38:30.519974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.520095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.520122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.520270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.520297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.520470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.520496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.520641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.520668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.520822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.520848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 
00:25:36.228 [2024-07-15 10:38:30.521021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.521047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.521236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.521262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.521413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.521439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.521592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.521617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.521745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.521771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.521923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.521950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.522098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.522130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.522285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.522312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.523086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.523116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.523275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.523302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 
00:25:36.228 [2024-07-15 10:38:30.524192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.524223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.524383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.524411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.525070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.525101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.525264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.525291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.525468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.525494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.525641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.525667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.525804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.525829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.526004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.526031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.526185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.526211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.526369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.526395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 
00:25:36.228 [2024-07-15 10:38:30.526655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.526680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.527596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.527637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.527868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.527903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.528034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.528062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.528203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.528228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.528388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.528415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.528551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.528578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.528717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.528743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.528889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.528921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.529066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.529092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 
00:25:36.228 [2024-07-15 10:38:30.529217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.529244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.529380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.529405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.529570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.529598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:36.228 [2024-07-15 10:38:30.529610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.529757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.228 [2024-07-15 10:38:30.529786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.228 qpair failed and we were unable to recover it. 00:25:36.228 [2024-07-15 10:38:30.529969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.529996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.530127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.530153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.530319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.530344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.530563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.530589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.530714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.530742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 
00:25:36.229 [2024-07-15 10:38:30.530870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.530902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.531128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.531154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.531270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.531297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.531419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.531445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.532197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.532240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.532456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.532484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.532639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.532666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.532838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.532865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.533056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.533082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.533242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.533268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 
00:25:36.229 [2024-07-15 10:38:30.533422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.533447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.533574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.533600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.533763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.533788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.533999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.534026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.534189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.534216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.535148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.535197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.535451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.535478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.535619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.535646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.535805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.535831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.535969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.535996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 
00:25:36.229 [2024-07-15 10:38:30.536154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.536190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.536319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.536344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.536475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.536501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.536671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.536697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.536822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.536848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.536976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.537002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.537180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.537206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.537402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.537427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.537580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.537605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.537752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.537778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 
00:25:36.229 [2024-07-15 10:38:30.537898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.537926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.538077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.538103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.538236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.538263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.538386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.538427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.538659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.229 [2024-07-15 10:38:30.538685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.229 qpair failed and we were unable to recover it. 00:25:36.229 [2024-07-15 10:38:30.538817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.538844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.538981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.539009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.539158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.539198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.539347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.539380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.539508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.539534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 
00:25:36.230 [2024-07-15 10:38:30.539721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.539748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.539901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.539945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.540088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.540117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.540309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.540335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.540485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.540511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.540635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.540661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.540836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.540862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.541001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.541031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.541182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.541208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.541341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.541367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 
00:25:36.230 [2024-07-15 10:38:30.541489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.541515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.541739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.541765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.541931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.541958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.542107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.542133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.542298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.542324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.542502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.542528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.542643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.542669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.542817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.542843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.543007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.543034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.543158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.543184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 
00:25:36.230 [2024-07-15 10:38:30.543313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.543339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.543471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.543498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.543625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.543652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.543828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.543854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.544629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.544657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.544874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.544907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.545063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.545090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.545265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.545291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.545462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.545488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 00:25:36.230 [2024-07-15 10:38:30.545646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.230 [2024-07-15 10:38:30.545673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.230 qpair failed and we were unable to recover it. 
00:25:36.230 [2024-07-15 10:38:30.545795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.230 [2024-07-15 10:38:30.545821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420
00:25:36.230 qpair failed and we were unable to recover it.
00:25:36.231 [2024-07-15 10:38:30.549769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.231 [2024-07-15 10:38:30.549811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:36.231 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / qpair-failed-and-unable-to-recover error triple repeats continuously for both tqpair=0x7fe64c000b90 and tqpair=0xc94200, always against addr=10.0.0.2, port=4420, through 00:25:36.236 [2024-07-15 10:38:30.583735] ...]
00:25:36.236 [2024-07-15 10:38:30.583854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.583884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.584014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.584039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.584163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.584192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.584312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.584337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.584480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.584509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.584660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.584687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.584815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.584842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.584963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.584990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.585126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.585152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.585285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.585311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 
00:25:36.236 [2024-07-15 10:38:30.585433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.585460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.585585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.585611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.585759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.585786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.585928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.585968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.586102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.586129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.586296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.586321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.586439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.586463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.586589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.586614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.586732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.586757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.586874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.586906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 
00:25:36.236 [2024-07-15 10:38:30.587026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.587051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.587169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.587199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.587330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.587359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.587475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.587501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.587634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.587660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.587810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.236 [2024-07-15 10:38:30.587836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.236 qpair failed and we were unable to recover it. 00:25:36.236 [2024-07-15 10:38:30.587992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.588019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.588142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.588168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.588296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.588322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.588445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.588482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 
00:25:36.237 [2024-07-15 10:38:30.588634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.588660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.588808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.588835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.588961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.588988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.589107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.589132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.589268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.589295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.589442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.589469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.589630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.589655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.589784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.589810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.590017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.590044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.590164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.590201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 
00:25:36.237 [2024-07-15 10:38:30.590350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.590376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.590564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.590589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.590707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.590733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.590860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.590898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.591026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.591052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.591177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.591203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.591319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.591345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.591467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.591493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.591637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.591663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.591800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.591827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 
00:25:36.237 [2024-07-15 10:38:30.591986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.592031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.592170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.592215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.592340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.592367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.592517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.592543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.592658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.592686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.592820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.592845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.592985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.593013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.593132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.593158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.593272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.593298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.593411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.593437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 
00:25:36.237 [2024-07-15 10:38:30.593584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.593611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.593729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.593754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.593940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.593967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.594094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.594119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.594252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.594279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.594459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.594486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.594923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.237 [2024-07-15 10:38:30.594952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.237 qpair failed and we were unable to recover it. 00:25:36.237 [2024-07-15 10:38:30.595090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.595118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.595259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.595288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.595675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.595716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 
00:25:36.238 [2024-07-15 10:38:30.595854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.595886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.596018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.596046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.596170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.596202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.596342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.596368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.596514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.596540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.596667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.596694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.596829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.596855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.596997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.597036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.597185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.597225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.597372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.597403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 
00:25:36.238 [2024-07-15 10:38:30.597558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.597583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.597730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.597756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.597870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.597906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.598029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.598055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.598175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.598201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.598364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.598389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.598520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.598548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.598670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.598700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.598893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.598921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.599053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.599079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 
00:25:36.238 [2024-07-15 10:38:30.599235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.599265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.599442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.599469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.599597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.599623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.599772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.599799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.599933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.599960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.600094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.600120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.600533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.600558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.600707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.600733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.600864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.600901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.601031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.601057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 
00:25:36.238 [2024-07-15 10:38:30.601186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.601212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.601364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.601392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.601529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.601556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.601682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.601709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.601886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.601912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.602044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.602070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.602196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.602222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.602378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.238 [2024-07-15 10:38:30.602404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.238 qpair failed and we were unable to recover it. 00:25:36.238 [2024-07-15 10:38:30.602557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.602583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.602701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.602727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 
00:25:36.239 [2024-07-15 10:38:30.602844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.602869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.603005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.603031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.603174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.603200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.603355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.603380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.603524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.603553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.603684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.603711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.603848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.603881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.604019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.604046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.604171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.604208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.604355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.604381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 
00:25:36.239 [2024-07-15 10:38:30.604528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.604553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.604677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.604702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.604815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.604840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.604993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.605034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.605165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.605194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.605347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.605373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.605526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.605551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.605703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.605728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.605850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.605882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.606012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.606038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 
00:25:36.239 [2024-07-15 10:38:30.606162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.606194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.606317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.606345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.606520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.606546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.606662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.606688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.606833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.606859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.607007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.607034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.607153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.607178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.607338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.607363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.607485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.607510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.607631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.607657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 
00:25:36.239 [2024-07-15 10:38:30.607771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.607798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.607927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.607954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.608078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.608105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.608229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.608255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.608406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.608432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.239 qpair failed and we were unable to recover it. 00:25:36.239 [2024-07-15 10:38:30.608562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.239 [2024-07-15 10:38:30.608588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.608764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.608789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.608917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.608943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.609058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.609083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.609260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.609285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 
00:25:36.240 [2024-07-15 10:38:30.609410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.609438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.609555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.609581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.609708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.609735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.609892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.609920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.610064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.610105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.610239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.610273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.610396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.610422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.610570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.610601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.610789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.610814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.610951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.610977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 
00:25:36.240 [2024-07-15 10:38:30.611092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.611117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.611280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.611305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.611451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.611478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.611620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.611645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.611789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.611830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.611982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.612010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.612142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.612170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.612330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.612356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.612531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.612556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.612706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.612731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 
00:25:36.240 [2024-07-15 10:38:30.612851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.612888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.613017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.613044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.613174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.613201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.613328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.613353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.613504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.613529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.613700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.613725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.613848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.613882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.614004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.614029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.614169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.614198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.614346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.614371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 
00:25:36.240 [2024-07-15 10:38:30.614520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.614546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.614699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.614725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.240 [2024-07-15 10:38:30.614871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.240 [2024-07-15 10:38:30.614902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.240 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.615025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.615052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.615189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.615238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.615401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.615429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.615551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.615577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.615751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.615777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.615897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.615924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.616048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.616074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 
00:25:36.241 [2024-07-15 10:38:30.616199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.616225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.616371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.616396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.616521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.616546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.616671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.616698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.616813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.616839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.616970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.616996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.617113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.617139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.617256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.617288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.617427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.617453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.617598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.617623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 
00:25:36.241 [2024-07-15 10:38:30.617770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.617796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.617940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.617980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.618106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.618134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.618290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.618315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.618467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.618493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.618607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.618633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.618783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.618809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.618945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.618972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.619098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.619124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.619239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.619264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 
00:25:36.241 [2024-07-15 10:38:30.619438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.619463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.619589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.619615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.619747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.619787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.619945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.619972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.620095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.620120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.620282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.620307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.620427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.620452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.620579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.620604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.620756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.620783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.620945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.620971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 
00:25:36.241 [2024-07-15 10:38:30.621092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.621118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.621267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.621293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.241 [2024-07-15 10:38:30.621409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.241 [2024-07-15 10:38:30.621435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.241 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.621567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.621607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.621751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.621795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.621931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.621959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.622108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.622133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.622263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.622288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.622434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.622460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.622635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.622663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 
00:25:36.242 [2024-07-15 10:38:30.622788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.622814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.622944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.622972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.623127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.623153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.623269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.623295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.623438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.623464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.623575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.623600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.623726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.623751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.623883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.623909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.624046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.624073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.624262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.624302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 
00:25:36.242 [2024-07-15 10:38:30.624460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.624487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.624638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.624665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.624813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.624839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.624966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.624992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.625113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.625139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.625254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.625282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.625428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.625453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.625586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.625614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.625765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.625792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.625933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.625959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 
00:25:36.242 [2024-07-15 10:38:30.626110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.626136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.626253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.626284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.626436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.626462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.626583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.626610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.626759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.626785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.626954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.626994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.627150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.627177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.627355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.627382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.627529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.627555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.627675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.627701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 
00:25:36.242 [2024-07-15 10:38:30.627887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.627913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.628036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.628062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.628210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.628236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.242 qpair failed and we were unable to recover it. 00:25:36.242 [2024-07-15 10:38:30.628353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.242 [2024-07-15 10:38:30.628378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.628526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.628552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.628720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.628759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.628895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.628923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.629049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.629076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.629228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.629253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.629428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.629454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 
00:25:36.243 [2024-07-15 10:38:30.629601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.629627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.629743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.629768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.629912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.629938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.630057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.630084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.630227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.630253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.630386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.630411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.630557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.630582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.630692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.630719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.630847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.630893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.631026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.631053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 
00:25:36.243 [2024-07-15 10:38:30.631182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.631209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.631361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.631387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.631534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.631559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.631712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.631741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe658000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.631905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.631932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.632082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.632108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.632256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.632282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.632437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.632463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.632615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.632640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.632793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.632820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 
00:25:36.243 [2024-07-15 10:38:30.632975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.633001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.633119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.633145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.633331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.633357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.633471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.633497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.633609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.633634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.633782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.633808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.633960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.633988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.634113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.634138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.634297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.634323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 00:25:36.243 [2024-07-15 10:38:30.634472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.243 [2024-07-15 10:38:30.634499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.243 qpair failed and we were unable to recover it. 
00:25:36.243 [2024-07-15 10:38:30.634681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.243 [2024-07-15 10:38:30.634707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:36.243 qpair failed and we were unable to recover it.
[... the same connect()/sock-connection-error/qpair-failed triplet repeats for tqpair=0x7fe654000b90 from 10:38:30.634827 through 10:38:30.642764 ...]
00:25:36.245 [2024-07-15 10:38:30.642912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.245 [2024-07-15 10:38:30.642951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:36.245 qpair failed and we were unable to recover it.
[... the triplet keeps repeating, alternating between tqpair=0xc94200 and tqpair=0x7fe654000b90, from 10:38:30.643112 through 10:38:30.649543 ...]
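For anyone triaging this failure mode: errno 111 is ECONNREFUSED on Linux, i.e. the host at 10.0.0.2 was reachable but nothing was accepting TCP connections on port 4420 when the initiator dialed out. A minimal sketch of how one might confirm that from a shell is below; the nc and ss invocations are illustrative assumptions (any equivalent probe works) and are not part of this test run.

# errno 111 == ECONNREFUSED: host reachable, but no listener owns the port.
# Probe the NVMe/TCP port from the initiator side (nc from netcat; -z = scan only, -w 2 = 2 s timeout):
nc -zv -w 2 10.0.0.2 4420
# On the target host, check whether the nvmf target ever opened its listener (ss from iproute2):
ss -ltn | grep ':4420'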
00:25:36.246 [2024-07-15 10:38:30.649673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.246 [2024-07-15 10:38:30.649697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:36.246 qpair failed and we were unable to recover it.
00:25:36.246 [2024-07-15 10:38:30.649816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.246 [2024-07-15 10:38:30.649828] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:36.246 [2024-07-15 10:38:30.649841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:36.246 qpair failed and we were unable to recover it.
00:25:36.246 [2024-07-15 10:38:30.649859] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:36.246 [2024-07-15 10:38:30.649874] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:36.246 [2024-07-15 10:38:30.649894] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:36.246 [2024-07-15 10:38:30.649904] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:36.246 [2024-07-15 10:38:30.649979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:25:36.246 [2024-07-15 10:38:30.649999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.246 [2024-07-15 10:38:30.650024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:36.246 qpair failed and we were unable to recover it.
00:25:36.246 [2024-07-15 10:38:30.650148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.246 [2024-07-15 10:38:30.650174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:25:36.246 [2024-07-15 10:38:30.650176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:36.246 qpair failed and we were unable to recover it.
00:25:36.246 [2024-07-15 10:38:30.650262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:25:36.246 [2024-07-15 10:38:30.650266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:25:36.246 [2024-07-15 10:38:30.650324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.246 [2024-07-15 10:38:30.650349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:36.246 qpair failed and we were unable to recover it.
00:25:36.246 [2024-07-15 10:38:30.650484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.246 [2024-07-15 10:38:30.650509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:36.246 qpair failed and we were unable to recover it.
00:25:36.246 [2024-07-15 10:38:30.650634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.246 [2024-07-15 10:38:30.650659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:36.246 qpair failed and we were unable to recover it.
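The app_setup_trace notices above give the trace-capture recipe verbatim; the sketch below just strings them together. The shm name 'nvmf' and instance id 0 come straight from the notice; the ./build/bin path is an assumption about a default SPDK build tree, and -f is spdk_trace's flag for reading a saved trace file.

# Live snapshot of the running nvmf app's tracepoints (names taken from the notice above):
./build/bin/spdk_trace -s nvmf -i 0
# Or preserve the shared-memory trace file for offline analysis after the app exits:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
./build/bin/spdk_trace -f /tmp/nvmf_trace.0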
00:25:36.246 [2024-07-15 10:38:30.650794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.246 [2024-07-15 10:38:30.650821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:36.246 qpair failed and we were unable to recover it.
[... the triplet keeps repeating, alternating between tqpair=0x7fe654000b90 and tqpair=0xc94200, from 10:38:30.650954 through 10:38:30.668016; final occurrence below ...]
00:25:36.249 [2024-07-15 10:38:30.668140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.249 [2024-07-15 10:38:30.668170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:36.249 qpair failed and we were unable to recover it.
00:25:36.249 [2024-07-15 10:38:30.668292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.249 [2024-07-15 10:38:30.668318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.249 qpair failed and we were unable to recover it. 00:25:36.249 [2024-07-15 10:38:30.668437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.249 [2024-07-15 10:38:30.668464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.249 qpair failed and we were unable to recover it. 00:25:36.249 [2024-07-15 10:38:30.668594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.249 [2024-07-15 10:38:30.668619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.249 qpair failed and we were unable to recover it. 00:25:36.249 [2024-07-15 10:38:30.668743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.249 [2024-07-15 10:38:30.668768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.249 qpair failed and we were unable to recover it. 00:25:36.249 [2024-07-15 10:38:30.668952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.249 [2024-07-15 10:38:30.668978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.249 qpair failed and we were unable to recover it. 00:25:36.249 [2024-07-15 10:38:30.669096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.249 [2024-07-15 10:38:30.669121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.249 qpair failed and we were unable to recover it. 00:25:36.249 [2024-07-15 10:38:30.669263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.249 [2024-07-15 10:38:30.669288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.249 qpair failed and we were unable to recover it. 00:25:36.249 [2024-07-15 10:38:30.669437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.249 [2024-07-15 10:38:30.669462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.249 qpair failed and we were unable to recover it. 00:25:36.249 [2024-07-15 10:38:30.669576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.249 [2024-07-15 10:38:30.669601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.249 qpair failed and we were unable to recover it. 00:25:36.249 [2024-07-15 10:38:30.669742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.249 [2024-07-15 10:38:30.669767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.249 qpair failed and we were unable to recover it. 
00:25:36.249 [2024-07-15 10:38:30.669906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.249 [2024-07-15 10:38:30.669945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.249 qpair failed and we were unable to recover it. 00:25:36.249 [2024-07-15 10:38:30.670090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.249 [2024-07-15 10:38:30.670116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.249 qpair failed and we were unable to recover it. 00:25:36.249 [2024-07-15 10:38:30.670271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.249 [2024-07-15 10:38:30.670297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.249 qpair failed and we were unable to recover it. 00:25:36.249 [2024-07-15 10:38:30.670422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.249 [2024-07-15 10:38:30.670447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.249 qpair failed and we were unable to recover it. 00:25:36.249 [2024-07-15 10:38:30.670561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.249 [2024-07-15 10:38:30.670586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.249 qpair failed and we were unable to recover it. 00:25:36.249 [2024-07-15 10:38:30.670708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.249 [2024-07-15 10:38:30.670732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.249 qpair failed and we were unable to recover it. 00:25:36.249 [2024-07-15 10:38:30.670896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.249 [2024-07-15 10:38:30.670922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.249 qpair failed and we were unable to recover it. 00:25:36.249 [2024-07-15 10:38:30.671051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.671076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.671187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.671211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.671337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.671362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 
00:25:36.250 [2024-07-15 10:38:30.671506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.671531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.671651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.671676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.671919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.671944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.672081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.672106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.672266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.672295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.672436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.672461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.672635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.672660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.672777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.672801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.672925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.672951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.673094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.673119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 
00:25:36.250 [2024-07-15 10:38:30.673232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.673257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.673385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.673410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.673538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.673563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.673715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.673739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.673857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.673898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.674014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.674039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.674163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.674188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.674311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.674336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.674480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.674505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.674618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.674643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 
00:25:36.250 [2024-07-15 10:38:30.674750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.674775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.674890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.674916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.675071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.675098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.675250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.675276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.675390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.675415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.675531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.675556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.675703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.675728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.675886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.675912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.676031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.676057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.676200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.676225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 
00:25:36.250 [2024-07-15 10:38:30.676377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.676402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.676521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.676546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.676705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.676730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.676944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.676970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.677086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.677111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.677245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.677270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.677388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.677413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.677552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.250 [2024-07-15 10:38:30.677577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.250 qpair failed and we were unable to recover it. 00:25:36.250 [2024-07-15 10:38:30.677695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.677720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.677841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.677866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 
00:25:36.251 [2024-07-15 10:38:30.677985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.678010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.678136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.678164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.678290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.678315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.678435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.678461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.678578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.678603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.678725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.678750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.678903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.678929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.679047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.679072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.679191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.679218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.679353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.679378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 
00:25:36.251 [2024-07-15 10:38:30.679495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.679520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.679639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.679664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.679819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.679844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.679969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.679995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.680138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.680163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.680279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.680304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.680420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.680445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.680555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.680579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.680697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.680722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.680857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.680908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 
00:25:36.251 [2024-07-15 10:38:30.681038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.681065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.681197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.681222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.681354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.681382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.681544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.681569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.681712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.681737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.681871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.681903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.682051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.682076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.682186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.682211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.682359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.682384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.682504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.682529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 
00:25:36.251 [2024-07-15 10:38:30.682679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.682704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.682837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.682883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.683007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.683040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.683257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.683283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.683433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.683459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.683581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.683605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.683749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.683774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.683900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.683927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.251 [2024-07-15 10:38:30.684073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.251 [2024-07-15 10:38:30.684098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.251 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.684220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.684245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 
00:25:36.252 [2024-07-15 10:38:30.684392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.684417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.684532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.684557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.684701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.684726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.684838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.684862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.685019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.685044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.685169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.685194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.685331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.685356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.685469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.685493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.685621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.685646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.685758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.685783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 
00:25:36.252 [2024-07-15 10:38:30.685904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.685929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.686054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.686079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.686200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.686225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.686350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.686374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.686554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.686579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.686691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.686717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.686891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.686917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.687043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.687068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.687205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.687244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.687401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.687433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 
00:25:36.252 [2024-07-15 10:38:30.687554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.687580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.687699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.687724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.687864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.687897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.688018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.688044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.688171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.688196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.688341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.688366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.688516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.688542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.688655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.688681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.688799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.688824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 00:25:36.252 [2024-07-15 10:38:30.688962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.688989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it. 
00:25:36.252 [2024-07-15 10:38:30.689113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.252 [2024-07-15 10:38:30.689140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.252 qpair failed and we were unable to recover it.
00:25:36.252 [... the same three-part error pattern repeats continuously from 10:38:30.689113 through 10:38:30.721468: posix_sock_create reports connect() failed, errno = 111; nvme_tcp_qpair_connect_sock reports a sock connection error against addr=10.0.0.2, port=4420, alternating between tqpair=0xc94200 and tqpair=0x7fe654000b90; and every attempt ends with "qpair failed and we were unable to recover it." Roughly 200 identical repetitions omitted here. ...]
00:25:36.258 [2024-07-15 10:38:30.721468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.258 [2024-07-15 10:38:30.721493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.258 qpair failed and we were unable to recover it.
00:25:36.258 [2024-07-15 10:38:30.721606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.258 [2024-07-15 10:38:30.721633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.258 qpair failed and we were unable to recover it. 00:25:36.258 [2024-07-15 10:38:30.721755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.258 [2024-07-15 10:38:30.721780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.258 qpair failed and we were unable to recover it. 00:25:36.258 [2024-07-15 10:38:30.721920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.258 [2024-07-15 10:38:30.721945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.258 qpair failed and we were unable to recover it. 00:25:36.258 [2024-07-15 10:38:30.722064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.258 [2024-07-15 10:38:30.722090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.258 qpair failed and we were unable to recover it. 00:25:36.258 [2024-07-15 10:38:30.722233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.258 [2024-07-15 10:38:30.722257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.258 qpair failed and we were unable to recover it. 00:25:36.258 [2024-07-15 10:38:30.722368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.258 [2024-07-15 10:38:30.722393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.258 qpair failed and we were unable to recover it. 00:25:36.258 [2024-07-15 10:38:30.722504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.258 [2024-07-15 10:38:30.722528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.258 qpair failed and we were unable to recover it. 00:25:36.258 [2024-07-15 10:38:30.722674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.258 [2024-07-15 10:38:30.722699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.258 qpair failed and we were unable to recover it. 00:25:36.258 [2024-07-15 10:38:30.722807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.258 [2024-07-15 10:38:30.722832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.258 qpair failed and we were unable to recover it. 00:25:36.258 [2024-07-15 10:38:30.722948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.258 [2024-07-15 10:38:30.722973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.258 qpair failed and we were unable to recover it. 
00:25:36.258 [2024-07-15 10:38:30.723102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.258 [2024-07-15 10:38:30.723128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.258 qpair failed and we were unable to recover it. 00:25:36.258 [2024-07-15 10:38:30.723285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.258 [2024-07-15 10:38:30.723310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.258 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.723428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.723453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.723576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.723601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.723710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.723735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.723851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.723895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.724020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.724046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.724162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.724187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.724312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.724337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.724466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.724491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 
00:25:36.259 [2024-07-15 10:38:30.724605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.724630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.724756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.724781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.724898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.724924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.725037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.725062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.725178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.725203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.725314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.725339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.725463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.725488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.725613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.725638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.725752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.725777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.725940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.725966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 
00:25:36.259 [2024-07-15 10:38:30.726081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.726106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.726222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.726247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.726365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.726390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.726534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.726559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.726672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.726696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.726830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.726855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.726985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.727010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.727165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.727198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.727318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.727343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.727492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.727517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 
00:25:36.259 [2024-07-15 10:38:30.727635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.727660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.727783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.727808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.727951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.727977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.728124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.728149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.728306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.728331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.728489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.728514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.728636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.728661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.728778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.728802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.728935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.728961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.729081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.729107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 
00:25:36.259 [2024-07-15 10:38:30.729229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.729255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.729376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.729401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.729547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.259 [2024-07-15 10:38:30.729572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.259 qpair failed and we were unable to recover it. 00:25:36.259 [2024-07-15 10:38:30.729696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.729721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.729844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.729870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.730020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.730045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.730212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.730237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.730358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.730383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.730534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.730560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.730689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.730714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 
00:25:36.260 [2024-07-15 10:38:30.730834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.730859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.731026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.731051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.731173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.731198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.731316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.731340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.731452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.731477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.731631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.731657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.731771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.731796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.731950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.731977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.732091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.732116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.732235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.732261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 
00:25:36.260 [2024-07-15 10:38:30.732409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.732434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.732611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.732636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.732753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.732778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.732915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.732941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.733060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.733085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.733207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.733233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.733384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.733409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.733583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.733608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.733728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.733753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.733904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.733932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 
00:25:36.260 [2024-07-15 10:38:30.734052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.734078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.734219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.734244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.734413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.734438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.734559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.734584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.734729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.734754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.734867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.734897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.735031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.735057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.735201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.735226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.735342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.735367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.735479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.735503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 
00:25:36.260 [2024-07-15 10:38:30.735616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.735641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.735765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.735790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.735937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.735965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.736074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.260 [2024-07-15 10:38:30.736099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.260 qpair failed and we were unable to recover it. 00:25:36.260 [2024-07-15 10:38:30.736227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.736252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.736378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.736403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.736539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.736565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.736677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.736702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.736814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.736838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.736980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.737006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 
00:25:36.261 [2024-07-15 10:38:30.737125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.737151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.737304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.737329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.737438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.737463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.737608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.737634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.737753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.737778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.737930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.737960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.738079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.738105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.738270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.738295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.738445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.738471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.738584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.738610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 
00:25:36.261 [2024-07-15 10:38:30.738729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.738754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.738905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.738937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.739061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.739086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.739207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.739232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.739361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.739386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.739531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.739556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.739683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.739708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.739829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.739856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.740025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.740050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.740177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.740208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 
00:25:36.261 [2024-07-15 10:38:30.740327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.740352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.740469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.740495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.740614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.740639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.740784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.740809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.740931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.740956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.741131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.741156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.741273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.741300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.741448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.741473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.741592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.741617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 00:25:36.261 [2024-07-15 10:38:30.741731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.261 [2024-07-15 10:38:30.741756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.261 qpair failed and we were unable to recover it. 
00:25:36.261 [2024-07-15 10:38:30.741882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.261 [2024-07-15 10:38:30.741908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:36.261 qpair failed and we were unable to recover it.
00:25:36.261 (the three messages above repeat ~190 times in total for tqpair=0xc94200 between [2024-07-15 10:38:30.741] and [2024-07-15 10:38:30.771]; individual timestamps elided)
00:25:36.266 [2024-07-15 10:38:30.771655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.266 [2024-07-15 10:38:30.771680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.266 qpair failed and we were unable to recover it. 00:25:36.266 [2024-07-15 10:38:30.771793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.266 [2024-07-15 10:38:30.771818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.266 qpair failed and we were unable to recover it. 00:25:36.266 [2024-07-15 10:38:30.771949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.266 [2024-07-15 10:38:30.771975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.266 qpair failed and we were unable to recover it. 00:25:36.266 [2024-07-15 10:38:30.772112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.266 [2024-07-15 10:38:30.772154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.266 qpair failed and we were unable to recover it. 00:25:36.266 [2024-07-15 10:38:30.772350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.266 [2024-07-15 10:38:30.772389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.266 qpair failed and we were unable to recover it. 00:25:36.266 [2024-07-15 10:38:30.772554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.266 [2024-07-15 10:38:30.772581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.266 qpair failed and we were unable to recover it. 00:25:36.266 [2024-07-15 10:38:30.772700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.266 [2024-07-15 10:38:30.772726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.266 qpair failed and we were unable to recover it. 00:25:36.266 [2024-07-15 10:38:30.772842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.266 [2024-07-15 10:38:30.772867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.266 qpair failed and we were unable to recover it. 00:25:36.266 [2024-07-15 10:38:30.773005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.266 [2024-07-15 10:38:30.773030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.266 qpair failed and we were unable to recover it. 00:25:36.266 [2024-07-15 10:38:30.773159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.266 [2024-07-15 10:38:30.773193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 
00:25:36.267 [2024-07-15 10:38:30.773349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.773376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.773525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.773550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.773662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.773688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.773808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.773832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.773962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.773987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.774132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.774157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.774281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.774306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.774454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.774479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.774607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.774634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.774785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.774810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 
00:25:36.267 [2024-07-15 10:38:30.774943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.774969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.775117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.775142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.775271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.775296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.775447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.775472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.775582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.775608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.775732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.775757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.775884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.775910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.776019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.776044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.776155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.776179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.776306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.776331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 
00:25:36.267 [2024-07-15 10:38:30.776487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.776514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.776655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.776680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.776805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.776832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.776965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.776991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.777111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.777136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.777270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.777295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.777440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.777466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.777588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.777613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.777737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.777762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.777884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.777910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 
00:25:36.267 [2024-07-15 10:38:30.778035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.778060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.778177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.778202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.778347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.778372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.778486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.778515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.778627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.778652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.778784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.778809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.778981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.779021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.779144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.779170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.779321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.779347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.267 [2024-07-15 10:38:30.779485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.779512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 
00:25:36.267 [2024-07-15 10:38:30.779636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.267 [2024-07-15 10:38:30.779664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.267 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.779843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.779871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.780003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.780029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.780148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.780172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.780313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.780338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.780456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.780481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.780592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.780616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.780743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.780768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.780895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.780930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.781046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.781070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 
00:25:36.268 [2024-07-15 10:38:30.781180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.781204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.781325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.781349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.781479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.781504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.781637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.781665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.781785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.781810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.781943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.781969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.782097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.782122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.782253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.782280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.782426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.782451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.782564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.782589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 
00:25:36.268 [2024-07-15 10:38:30.782717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.782745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.782857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.782887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.782997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.783021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.783167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.783191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.783310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.783334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.783465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.783490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.783634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.783658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.783774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.783799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.783926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.783951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.784106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.784131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 
00:25:36.268 [2024-07-15 10:38:30.784288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.784313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.784442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.784466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.784639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.784664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.784777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.784802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.784945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.784985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.785137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.785164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.785294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.785322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.785449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.785475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.785625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.785651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.785766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.785792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 
00:25:36.268 [2024-07-15 10:38:30.785929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.785956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.268 [2024-07-15 10:38:30.786076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.268 [2024-07-15 10:38:30.786101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.268 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.786273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.786298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.786429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.786454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.786578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.786602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.786723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.786748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.786900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.786925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.787050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.787074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.787223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.787248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.787368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.787393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 
00:25:36.269 [2024-07-15 10:38:30.787539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.787563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.787706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.787731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.787891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.787941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.788062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.788088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.788231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.788257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.788405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.788430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.788577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.788603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.788731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.788756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.788915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.788942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.789062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.789087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 
00:25:36.269 [2024-07-15 10:38:30.789214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.789239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.789372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.789397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.789571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.789596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.789721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.789746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.789867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.789902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.790032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.790057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.790178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.790203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.790351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.790377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.790498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.790523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.790642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.790667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 
00:25:36.269 [2024-07-15 10:38:30.790791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.790817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.790947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.790972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.791100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.791125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.791256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.791281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.791411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.791436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.791564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.791589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.791732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.791757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.791889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.791916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.792033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.792058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.792183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.792208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 
00:25:36.269 [2024-07-15 10:38:30.792323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.792348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.792461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.792486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.269 [2024-07-15 10:38:30.792630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.269 [2024-07-15 10:38:30.792654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.269 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.792777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.792802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.792916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.792941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.793064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.793092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.793241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.793267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.793414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.793441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.793594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.793620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.793743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.793768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 
00:25:36.270 [2024-07-15 10:38:30.793939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.793964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.794077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.794102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.794252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.794276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.794399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.794423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.794536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.794561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.794719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.794743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.794855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.794884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.794995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.795020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.795157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.795182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.795304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.795329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 
00:25:36.270 [2024-07-15 10:38:30.795446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.795473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.795594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.795620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.795751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.795778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.795925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.795951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.796063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.796089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.796216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.796243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.796360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.796386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.796529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.796554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.796669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.796694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.796818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.796843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 
00:25:36.270 [2024-07-15 10:38:30.796981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.797006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.797132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.797157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.797291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.797316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.797487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.797512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.797634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.797659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.797773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.797802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.797921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.797947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.798071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.798096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.798242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.798267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.270 [2024-07-15 10:38:30.798390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.798415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 
00:25:36.270 [2024-07-15 10:38:30.798527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.270 [2024-07-15 10:38:30.798551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.270 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.798665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.798690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.798801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.798826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.798980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.799006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.799112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.799137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.799285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.799312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.799466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.799491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.799607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.799631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.799764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.799789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.799919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.799945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 
00:25:36.271 [2024-07-15 10:38:30.800075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.800100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.800246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.800270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.800385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.800410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.800555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.800579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.800691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.800716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.800825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.800850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.801013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.801052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.801202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.801229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.801386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.801412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.801556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.801582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 
00:25:36.271 [2024-07-15 10:38:30.801692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.801717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.801835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.801860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.801980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.802010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.802137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.802162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.802290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.802315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.271 [2024-07-15 10:38:30.802458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.271 [2024-07-15 10:38:30.802483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.271 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.802602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.802627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.802772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.802797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.802937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.802962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.803090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.803115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 
00:25:36.272 [2024-07-15 10:38:30.803232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.803256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.803378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.803403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.803525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.803550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.803688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.803713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.803859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.803894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.804043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.804068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.804204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.804229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.804361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.804385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.804530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.804554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.804681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.804705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 
00:25:36.272 [2024-07-15 10:38:30.804848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.804894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.805051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.805079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.805212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.805239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.805356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.805383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.805556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.805582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.805720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.805746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.805874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.805906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.806025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.806050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.806164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.806189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.806301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.806330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 
00:25:36.272 [2024-07-15 10:38:30.806474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.806499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.806622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.806647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.806774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.806798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.806920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.806946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.807122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.807147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.807296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.807320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.807438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.807463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.807588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.807613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.807751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.807775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.807897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.807922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 
00:25:36.272 [2024-07-15 10:38:30.808038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.808063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.808179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.808203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.808347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.808371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.272 qpair failed and we were unable to recover it. 00:25:36.272 [2024-07-15 10:38:30.808491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.272 [2024-07-15 10:38:30.808515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.808656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.808695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.808816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.808842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.808967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.808994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.809121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.809147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.809270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.809295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.809443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.809469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 
00:25:36.273 [2024-07-15 10:38:30.809644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.809669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.809811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.809835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.809957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.809982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.810106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.810131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.810275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.810300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.810448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.810473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.810602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.810634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.810760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.810786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.810939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.810966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.811116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.811142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 
00:25:36.273 [2024-07-15 10:38:30.811262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.811287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.811411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.811436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.811556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.811581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.811707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.811733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.811861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.811891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.812007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.812032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.812176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.812200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.812351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.812376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.812532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.812558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.812709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.812736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 
00:25:36.273 [2024-07-15 10:38:30.812862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.812895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.813042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.813068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.813219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.813244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.813358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.813384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.813500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.813526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.813682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.813706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.813827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.813851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.813999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.814025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.814171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.814196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.814311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.814336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 
00:25:36.273 [2024-07-15 10:38:30.814451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.814476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.273 [2024-07-15 10:38:30.814588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.273 [2024-07-15 10:38:30.814613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.273 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.814737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.814762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.814910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.814940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.815053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.815078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.815227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.815252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.815367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.815392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.815546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.815571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.815695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.815719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.815883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.815923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 
00:25:36.274 [2024-07-15 10:38:30.816048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.816075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.816201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.816227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.816347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.816373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.816516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.816542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.816697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.816722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.816868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.816899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.817019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.817044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.817200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.817225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.817371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.817396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.817541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.817565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 
00:25:36.274 [2024-07-15 10:38:30.817679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.817704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.817861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.817908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.818064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.818091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.818244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.818269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.818394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.818419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.818567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.818592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.818733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.818758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.818899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.818937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.819094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.819120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.819256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.819282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 
00:25:36.274 [2024-07-15 10:38:30.819407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.819437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.819585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.819610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.819739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.819765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.819897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.819923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.820037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.820063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.820181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.820206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.820315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.820339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.820460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.820484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.820608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.820633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.820748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.820772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 
00:25:36.274 [2024-07-15 10:38:30.820927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-07-15 10:38:30.820953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.274 qpair failed and we were unable to recover it. 00:25:36.274 [2024-07-15 10:38:30.821069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.275 [2024-07-15 10:38:30.821094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.275 qpair failed and we were unable to recover it. 00:25:36.275 [2024-07-15 10:38:30.821208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.275 [2024-07-15 10:38:30.821233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.275 qpair failed and we were unable to recover it. 00:25:36.275 [2024-07-15 10:38:30.821372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.275 [2024-07-15 10:38:30.821397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.275 qpair failed and we were unable to recover it. 00:25:36.275 [2024-07-15 10:38:30.821525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.275 [2024-07-15 10:38:30.821550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.275 qpair failed and we were unable to recover it. 00:25:36.275 [2024-07-15 10:38:30.821670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.275 [2024-07-15 10:38:30.821695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.275 qpair failed and we were unable to recover it. 00:25:36.275 [2024-07-15 10:38:30.821837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.275 [2024-07-15 10:38:30.821862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.275 qpair failed and we were unable to recover it. 00:25:36.275 [2024-07-15 10:38:30.821983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.275 [2024-07-15 10:38:30.822008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.275 qpair failed and we were unable to recover it. 00:25:36.275 [2024-07-15 10:38:30.822129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.275 [2024-07-15 10:38:30.822153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.275 qpair failed and we were unable to recover it. 00:25:36.275 [2024-07-15 10:38:30.822269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.275 [2024-07-15 10:38:30.822293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.275 qpair failed and we were unable to recover it. 
00:25:36.275 [2024-07-15 10:38:30.822418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.275 [2024-07-15 10:38:30.822442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:36.275 qpair failed and we were unable to recover it.
00:25:36.276 [2024-07-15 10:38:30.826892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.276 [2024-07-15 10:38:30.826932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420
00:25:36.276 qpair failed and we were unable to recover it.
00:25:36.276 [2024-07-15 10:38:30.827999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.276 [2024-07-15 10:38:30.828038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:36.276 qpair failed and we were unable to recover it.
... (this three-line connect()/qpair-failure sequence repeats more than 200 times between 10:38:30.822 and 10:38:30.856, cycling through tqpair values 0xc94200, 0x7fe64c000b90, and 0x7fe654000b90, always with errno = 111 against addr=10.0.0.2, port=4420; none of the qpairs recovered)
00:25:36.546 [2024-07-15 10:38:30.856092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.856119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.546 [2024-07-15 10:38:30.856261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.856286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.546 [2024-07-15 10:38:30.856427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.856452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.546 [2024-07-15 10:38:30.856581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.856607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.546 [2024-07-15 10:38:30.856737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.856762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.546 [2024-07-15 10:38:30.856916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.856944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.546 [2024-07-15 10:38:30.857065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.857091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.546 [2024-07-15 10:38:30.857226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.857251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.546 [2024-07-15 10:38:30.857381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.857407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.546 [2024-07-15 10:38:30.857529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.857555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 
00:25:36.546 [2024-07-15 10:38:30.857692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.857731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.546 [2024-07-15 10:38:30.857888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.857915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.546 [2024-07-15 10:38:30.858034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.858064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.546 [2024-07-15 10:38:30.858216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.858241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.546 [2024-07-15 10:38:30.858356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.858381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.546 [2024-07-15 10:38:30.858495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.858520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.546 [2024-07-15 10:38:30.858632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.858657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.546 [2024-07-15 10:38:30.858806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.858835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.546 [2024-07-15 10:38:30.858987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.546 [2024-07-15 10:38:30.859013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.546 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.859128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.859154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 
00:25:36.547 [2024-07-15 10:38:30.859291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.859317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.859431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.859456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.859630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.859656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.859818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.859844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.859968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.859994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.860116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.860141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.860266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.860291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.860429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.860454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.860596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.860620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.860736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.860761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 
00:25:36.547 [2024-07-15 10:38:30.860875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.860905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.861031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.861056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.861201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.861226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.861338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.861362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.861512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.861537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.861648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.861673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.861829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.861853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.861975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.862001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.862148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.862176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.862299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.862325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 
00:25:36.547 [2024-07-15 10:38:30.862445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.862471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.862591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.862618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.862765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.862791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.862919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.862946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.863101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.863127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.863251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.863277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.863395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.863420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.863551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.863578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.863702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.863727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.863854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.863884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 
00:25:36.547 [2024-07-15 10:38:30.864013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.864038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.864159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.864184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.864362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.864387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.864512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.864539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.864658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.864684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.864831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.864857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.865028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.865056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.865207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.865232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.865379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.865404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 00:25:36.547 [2024-07-15 10:38:30.865556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.547 [2024-07-15 10:38:30.865580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.547 qpair failed and we were unable to recover it. 
00:25:36.548 [2024-07-15 10:38:30.865721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.865746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.865880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.865920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.866046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.866072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.866219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.866244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.866355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.866380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.866528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.866553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.866666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.866696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.866827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.866853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.866977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.867003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.867124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.867150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 
00:25:36.548 [2024-07-15 10:38:30.867293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.867318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.867430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.867454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.867572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.867597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.867718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.867742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.867855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.867885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.868041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.868066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.868193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.868217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.868325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.868350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.868504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.868529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.868643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.868671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 
00:25:36.548 [2024-07-15 10:38:30.868805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.868830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.868983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.869009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.869133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.869158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.869310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.869335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.869454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.869479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.869608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.869633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.869750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.869775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.869906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.869932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.870091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.870116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.870231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.870256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 
00:25:36.548 [2024-07-15 10:38:30.870372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.870397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.870506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.870531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.870647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.870671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.870786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.870815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.870951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.870979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.871116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.871142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.871260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.871285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.871427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.871452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.871565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.871591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.548 [2024-07-15 10:38:30.871739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.871764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 
00:25:36.548 [2024-07-15 10:38:30.871922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.548 [2024-07-15 10:38:30.871947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.548 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.872118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.872157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.872283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.872310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.872455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.872480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.872594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.872620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.872741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.872768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.872899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.872926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.873064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.873090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.873247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.873272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.873384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.873408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 
00:25:36.549 [2024-07-15 10:38:30.873529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.873555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.873674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.873700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.873825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.873851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.873973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.874000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.874125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.874150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.874300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.874325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.874477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.874502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.874626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.874651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.874762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.874787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.874923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.874963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 
00:25:36.549 [2024-07-15 10:38:30.875087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.875123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.875277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.875302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.875426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.875451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.875570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.875595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.875719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.875745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.875862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.875896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.876013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.876038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.876208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.876234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.876357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.876382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 00:25:36.549 [2024-07-15 10:38:30.876514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.549 [2024-07-15 10:38:30.876539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.549 qpair failed and we were unable to recover it. 
00:25:36.549 [2024-07-15 10:38:30.876668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.549 [2024-07-15 10:38:30.876694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420
00:25:36.549 qpair failed and we were unable to recover it.
00:25:36.549 [2024-07-15 10:38:30.876838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.549 [2024-07-15 10:38:30.876884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420
00:25:36.549 qpair failed and we were unable to recover it.
00:25:36.550 [2024-07-15 10:38:30.877651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.550 [2024-07-15 10:38:30.877678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420
00:25:36.550 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously through 10:38:30.909519 (log offsets 00:25:36.549-00:25:36.555), cycling among tqpair=0xc94200, 0x7fe654000b90, and 0x7fe64c000b90; every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:25:36.555 [2024-07-15 10:38:30.909668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.909693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.909811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.909836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.909965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.909991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.910109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.910133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.910245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.910270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.910389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.910416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.910545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.910570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.910698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.910723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.910836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.910861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.911022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.911047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 
00:25:36.555 [2024-07-15 10:38:30.911164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.911191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.911311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.911336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.911485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.911510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.911628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.911654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.911766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.911791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.911918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.911958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.912107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.912133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.912259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.912284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.912396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.912422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.912569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.912594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 
00:25:36.555 [2024-07-15 10:38:30.912747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.912772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.912887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.912914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.913039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.913071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.913212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.913237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.913384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.913409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.913534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.913558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.913702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.913727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.913852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.913886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.914009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.914034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.914184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.914209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 
00:25:36.555 [2024-07-15 10:38:30.914348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.914372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.914526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.914551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.555 [2024-07-15 10:38:30.914691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.555 [2024-07-15 10:38:30.914716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.555 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.914833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.914859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.914984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.915009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.915157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.915182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.915330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.915355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.915478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.915503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.915628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.915653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.915777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.915804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc94200 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 
00:25:36.556 [2024-07-15 10:38:30.915940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.915980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.916115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.916142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.916273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.916299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.916429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.916454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.916604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.916630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe64c000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.916755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.916782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.916904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.916932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.917047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.917072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.917222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.917248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.917374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.917404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 
00:25:36.556 [2024-07-15 10:38:30.917528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.917553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.917702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.917726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.917849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.917874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.917995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.918021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.918169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.918194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.918338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.918363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.918490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.918517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 [2024-07-15 10:38:30.918634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.918660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe654000b90 with addr=10.0.0.2, port=4420 00:25:36.556 qpair failed and we were unable to recover it. 00:25:36.556 A controller has encountered a failure and is being reset. 
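errno = 111 is ECONNREFUSED: nothing was accepting TCP connections on 10.0.0.2:4420 while the initiator kept retrying, which is the condition a target-disconnect test deliberately creates. A minimal sketch of checking for the refusal from the initiator host, using bash's /dev/tcp pseudo-device (illustrative only, not part of the test harness):

# Attempt a TCP connect to the NVMe/TCP listener; a refused connection
# here corresponds to the errno = 111 failures in the log above.
timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' \
    && echo "listener up on 10.0.0.2:4420" \
    || echo "connect failed (likely errno 111, ECONNREFUSED)"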
00:25:36.556 [2024-07-15 10:38:30.918865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.556 [2024-07-15 10:38:30.918915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca20e0 with addr=10.0.0.2, port=4420 00:25:36.556 [2024-07-15 10:38:30.918936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca20e0 is same with the state(5) to be set 00:25:36.556 [2024-07-15 10:38:30.918964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca20e0 (9): Bad file descriptor 00:25:36.556 [2024-07-15 10:38:30.918984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.556 [2024-07-15 10:38:30.919006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.556 [2024-07-15 10:38:30.919023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.556 Unable to reset the controller. 00:25:36.815 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:36.815 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:36.815 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:36.815 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:36.815 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.073 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.073 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:37.073 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.073 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.073 Malloc0 00:25:37.073 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.073 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:37.073 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.073 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.073 [2024-07-15 10:38:31.505283] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.073 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.073 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:37.073 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.073 10:38:31 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.074 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.074 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:37.074 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.074 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.074 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.074 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:37.074 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.074 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.074 [2024-07-15 10:38:31.533562] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.074 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.074 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:37.074 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.074 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.074 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.074 10:38:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2417614 00:25:37.332 Controller properly reset. 00:25:42.613 Initializing NVMe Controllers 00:25:42.613 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:42.613 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:42.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:42.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:42.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:42.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:42.613 Initialization complete. Launching workers. 
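The rpc_cmd invocations above map one-to-one onto scripts/rpc.py subcommands. A standalone sketch of the same target bring-up, assuming an already-running nvmf_tgt and the repository's scripts/rpc.py (arguments mirror the log; adjust the 10.0.0.2 address for your setup):

# Create a 64 MiB malloc bdev with 512-byte blocks, the TCP transport,
# a subsystem, a namespace backed by the bdev, and TCP listeners.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420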
00:25:42.613 Starting thread on core 1 00:25:42.613 Starting thread on core 2 00:25:42.613 Starting thread on core 3 00:25:42.613 Starting thread on core 0 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:42.613 00:25:42.613 real 0m11.434s 00:25:42.613 user 0m35.339s 00:25:42.613 sys 0m7.606s 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:42.613 ************************************ 00:25:42.613 END TEST nvmf_target_disconnect_tc2 00:25:42.613 ************************************ 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:42.613 rmmod nvme_tcp 00:25:42.613 rmmod nvme_fabrics 00:25:42.613 rmmod nvme_keyring 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2418024 ']' 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2418024 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2418024 ']' 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2418024 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2418024 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2418024' 00:25:42.613 killing process with pid 2418024 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2418024 00:25:42.613 10:38:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2418024 00:25:42.613 
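nvmftestfini then tears the environment down: the initiator-side kernel modules are unloaded (hence the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines) and the target app is killed. An equivalent manual cleanup, assuming the modules were loaded for this test and that $nvmf_pid holds the nvmf_tgt PID (both hypothetical here):

# Unload initiator modules; modprobe -r also drops now-unused dependencies.
sudo modprobe -v -r nvme-tcp
sudo modprobe -v -r nvme-fabrics
# Stop and reap the target application.
kill "$nvmf_pid" && wait "$nvmf_pid"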
10:38:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:42.613 10:38:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:42.613 10:38:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:42.613 10:38:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:42.613 10:38:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:42.613 10:38:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.613 10:38:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.613 10:38:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.150 10:38:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:45.150 00:25:45.150 real 0m16.194s 00:25:45.150 user 1m1.024s 00:25:45.150 sys 0m10.061s 00:25:45.150 10:38:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:45.150 10:38:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:45.150 ************************************ 00:25:45.150 END TEST nvmf_target_disconnect 00:25:45.150 ************************************ 00:25:45.150 10:38:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:45.150 10:38:39 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:25:45.150 10:38:39 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:45.150 10:38:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:45.150 10:38:39 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:25:45.150 00:25:45.150 real 19m42.975s 00:25:45.150 user 46m57.948s 00:25:45.150 sys 4m56.873s 00:25:45.150 10:38:39 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:45.150 10:38:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:45.150 ************************************ 00:25:45.150 END TEST nvmf_tcp 00:25:45.150 ************************************ 00:25:45.150 10:38:39 -- common/autotest_common.sh@1142 -- # return 0 00:25:45.150 10:38:39 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:45.150 10:38:39 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:45.150 10:38:39 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:45.150 10:38:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.150 10:38:39 -- common/autotest_common.sh@10 -- # set +x 00:25:45.150 ************************************ 00:25:45.150 START TEST spdkcli_nvmf_tcp 00:25:45.150 ************************************ 00:25:45.150 10:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:45.150 * Looking for test storage... 
00:25:45.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:45.150 10:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:45.150 10:38:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:45.150 10:38:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:45.150 10:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:45.150 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:45.150 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.150 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.150 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.150 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.150 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.150 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.150 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.150 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.150 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2419223 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2419223 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2419223 ']' 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:45.151 [2024-07-15 10:38:39.446378] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:45.151 [2024-07-15 10:38:39.446455] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2419223 ] 00:25:45.151 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.151 [2024-07-15 10:38:39.503991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:45.151 [2024-07-15 10:38:39.617535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.151 [2024-07-15 10:38:39.617539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:45.151 10:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:45.151 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:45.151 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:45.151 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:45.151 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:45.151 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:45.151 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:45.151 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:45.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:45.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:45.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:45.151 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:45.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:45.151 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:45.151 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:45.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:45.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:45.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:45.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:45.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:45.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:45.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:45.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:45.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:45.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:45.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:45.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:45.151 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:45.151 ' 00:25:47.676 [2024-07-15 10:38:42.306818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.047 [2024-07-15 10:38:43.547143] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:51.576 [2024-07-15 10:38:45.826459] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:53.472 [2024-07-15 10:38:47.772652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:54.844 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:54.844 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:54.844 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:54.844 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:54.844 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:54.844 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:54.844 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:54.844 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:54.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:54.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:54.844 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:54.844 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:54.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:54.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:54.845 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:54.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:54.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:54.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:54.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:54.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:54.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:54.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:54.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:54.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:54.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:54.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:54.845 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:54.845 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:54.845 10:38:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:54.845 10:38:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:54.845 10:38:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:54.845 10:38:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:54.845 10:38:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:54.845 10:38:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:54.845 10:38:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:54.845 10:38:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:55.411 10:38:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:55.411 10:38:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:55.411 10:38:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:55.411 10:38:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:55.411 10:38:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:55.411 10:38:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:55.411 10:38:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:55.411 10:38:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:55.411 10:38:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:55.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:55.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:55.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:55.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:55.411 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:55.411 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:55.411 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:55.411 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:55.411 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:55.411 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:55.411 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:55.411 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:55.411 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:55.411 ' 00:26:00.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:00.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:00.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:00.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:00.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:00.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:00.674 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:00.674 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:00.674 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:00.674 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:00.674 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:26:00.674 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:00.674 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:00.674 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:00.674 10:38:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:00.674 10:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:00.674 10:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:00.674 10:38:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2419223 00:26:00.674 10:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2419223 ']' 00:26:00.674 10:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2419223 00:26:00.674 10:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:26:00.674 10:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:00.674 10:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2419223 00:26:00.674 10:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:00.674 10:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:00.674 10:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2419223' 00:26:00.674 killing process with pid 2419223 00:26:00.674 10:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2419223 00:26:00.674 10:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2419223 00:26:00.933 10:38:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:00.933 10:38:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:00.933 10:38:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2419223 ']' 00:26:00.933 10:38:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2419223 00:26:00.933 10:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2419223 ']' 00:26:00.933 10:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2419223 00:26:00.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2419223) - No such process 00:26:00.933 10:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2419223 is not found' 00:26:00.933 Process with pid 2419223 is not found 00:26:00.933 10:38:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:00.933 10:38:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:00.933 10:38:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:00.933 00:26:00.933 real 0m16.099s 00:26:00.933 user 0m34.019s 00:26:00.933 sys 0m0.816s 00:26:00.933 10:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:00.933 10:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:00.933 ************************************ 00:26:00.933 END TEST spdkcli_nvmf_tcp 00:26:00.933 ************************************ 00:26:00.933 10:38:55 -- common/autotest_common.sh@1142 -- # return 0 00:26:00.933 10:38:55 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:00.933 10:38:55 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:00.933 10:38:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:00.933 10:38:55 -- common/autotest_common.sh@10 -- # set +x 00:26:00.933 ************************************ 00:26:00.933 START TEST nvmf_identify_passthru 00:26:00.933 ************************************ 00:26:00.933 10:38:55 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:00.933 * Looking for test storage... 00:26:00.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:00.933 10:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:00.933 10:38:55 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.933 10:38:55 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.933 10:38:55 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.933 10:38:55 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.933 10:38:55 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.933 10:38:55 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.933 10:38:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:00.933 10:38:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:00.933 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:00.934 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:00.934 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:00.934 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:00.934 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:00.934 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:00.934 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:00.934 10:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:00.934 10:38:55 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.934 10:38:55 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.934 10:38:55 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.934 10:38:55 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.934 10:38:55 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.934 10:38:55 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.934 10:38:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:00.934 10:38:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.934 10:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:00.934 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:00.934 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.934 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:00.934 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:00.934 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:00.934 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.934 10:38:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:00.934 10:38:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.934 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:00.934 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:00.934 10:38:55 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:26:00.934 10:38:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:02.839 10:38:57 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:02.839 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:02.839 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:02.839 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:02.839 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
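The two "Found net devices under ..." notices above boil down to a plain sysfs walk: for every PCI function whose vendor/device IDs match a supported NIC, the harness lists the kernel netdevs registered under it. A minimal standalone sketch of that walk in bash, assuming a Linux sysfs layout; the 0000:0a:00.x addresses and the 0x8086/0x159b (E810, driver "ice") IDs are taken from this run, everything else is illustrative:

  intel=0x8086
  for pci in 0000:0a:00.0 0000:0a:00.1; do
    vendor=$(<"/sys/bus/pci/devices/$pci/vendor")
    device=$(<"/sys/bus/pci/devices/$pci/device")
    # keep only functions that match the expected NIC model
    [[ $vendor == "$intel" && $device == 0x159b ]] || continue
    # each matching function exposes its netdev name(s) under net/
    for net_dev in "/sys/bus/pci/devices/$pci/net/"*; do
      echo "Found net devices under $pci: ${net_dev##*/}"
    done
  done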
00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:02.839 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:03.099 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:03.099 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:03.099 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:03.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:03.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:26:03.099 00:26:03.099 --- 10.0.0.2 ping statistics --- 00:26:03.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.099 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:26:03.099 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:03.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:03.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:26:03.099 00:26:03.099 --- 10.0.0.1 ping statistics --- 00:26:03.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.099 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:26:03.099 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:03.099 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:26:03.099 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:03.099 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:03.099 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:03.099 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:03.099 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:03.099 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:03.099 10:38:57 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:03.099 10:38:57 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:03.099 10:38:57 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:03.099 10:38:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:03.099 10:38:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:03.099 10:38:57 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:26:03.099 10:38:57 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:26:03.099 10:38:57 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:26:03.099 10:38:57 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:26:03.099 10:38:57 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:26:03.099 10:38:57 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:26:03.099 10:38:57 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:03.099 10:38:57 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:03.099 10:38:57 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:26:03.099 10:38:57 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:26:03.099 10:38:57 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:26:03.099 10:38:57 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:26:03.099 10:38:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:26:03.099 10:38:57 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:26:03.099 10:38:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:26:03.099 10:38:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:03.099 10:38:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:03.099 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.285 
10:39:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:26:07.285 10:39:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:26:07.285 10:39:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:07.285 10:39:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:07.285 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.466 10:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:26:11.466 10:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:11.466 10:39:05 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:11.466 10:39:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:11.466 10:39:06 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:11.466 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:11.466 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:11.466 10:39:06 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2423958 00:26:11.466 10:39:06 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:11.466 10:39:06 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:11.466 10:39:06 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2423958 00:26:11.466 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2423958 ']' 00:26:11.466 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.466 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:11.466 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.466 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:11.466 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:11.466 [2024-07-15 10:39:06.059059] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:11.466 [2024-07-15 10:39:06.059165] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.466 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.725 [2024-07-15 10:39:06.126413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:11.725 [2024-07-15 10:39:06.239253] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.725 [2024-07-15 10:39:06.239316] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
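The serial and model scraped above from the local PCIe controller are the test's ground truth: once the target is configured, the same spdk_nvme_identify binary is pointed at the NVMe/TCP listener, and the run only passes if both answers agree. Condensed into a few lines of bash; the identify invocations and the 0000:88:00.0 / 10.0.0.2 addresses are verbatim from this run, variable names are shortened here:

  IDENTIFY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
  # local controller, reached directly over PCIe
  nvme_serial=$("$IDENTIFY" -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 | grep 'Serial Number:' | awk '{print $3}')
  # same controller, re-exported over NVMe/TCP with identify passthru enabled
  nvmf_serial=$("$IDENTIFY" -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:' | awk '{print $3}')
  # the comparison traced further down: a mismatch fails the test
  [ "$nvme_serial" != "$nvmf_serial" ] && exit 1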
00:26:11.725 [2024-07-15 10:39:06.239344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 
00:26:11.725 [2024-07-15 10:39:06.239356] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:11.725 [2024-07-15 10:39:06.239366] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:11.725 [2024-07-15 10:39:06.239430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 
00:26:11.725 [2024-07-15 10:39:06.239499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 
00:26:11.725 [2024-07-15 10:39:06.239564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 
00:26:11.725 [2024-07-15 10:39:06.239568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 
00:26:11.725 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 
00:26:11.725 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 
00:26:11.725 10:39:06 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 
00:26:11.725 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 
00:26:11.725 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 
00:26:11.725 INFO: Log level set to 20 
00:26:11.725 INFO: Requests: 
00:26:11.725 { 
00:26:11.725 "jsonrpc": "2.0", 
00:26:11.725 "method": "nvmf_set_config", 
00:26:11.725 "id": 1, 
00:26:11.725 "params": { 
00:26:11.725 "admin_cmd_passthru": { 
00:26:11.725 "identify_ctrlr": true 
00:26:11.725 } 
00:26:11.725 } 
00:26:11.725 } 
00:26:11.725 
00:26:11.725 INFO: response: 
00:26:11.725 { 
00:26:11.725 "jsonrpc": "2.0", 
00:26:11.725 "id": 1, 
00:26:11.725 "result": true 
00:26:11.725 } 
00:26:11.725 
00:26:11.725 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:26:11.725 10:39:06 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 
00:26:11.725 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 
00:26:11.725 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 
00:26:11.725 INFO: Setting log level to 20 
00:26:11.725 INFO: Setting log level to 20 
00:26:11.725 INFO: Log level set to 20 
00:26:11.725 INFO: Log level set to 20 
00:26:11.725 INFO: Requests: 
00:26:11.725 { 
00:26:11.725 "jsonrpc": "2.0", 
00:26:11.725 "method": "framework_start_init", 
00:26:11.725 "id": 1 
00:26:11.725 } 
00:26:11.725 
00:26:11.725 INFO: Requests: 
00:26:11.725 { 
00:26:11.725 "jsonrpc": "2.0", 
00:26:11.725 "method": "framework_start_init", 
00:26:11.725 "id": 1 
00:26:11.725 } 
00:26:11.725 
00:26:11.983 [2024-07-15 10:39:06.386260] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 
00:26:11.983 INFO: response: 
00:26:11.983 { 
00:26:11.983 "jsonrpc": "2.0", 
00:26:11.983 "id": 1, 
00:26:11.983 "result": true 
00:26:11.983 } 
00:26:11.983 
00:26:11.983 INFO: response: 
00:26:11.983 { 
00:26:11.983 "jsonrpc": "2.0", 
00:26:11.983 "id": 1, 
00:26:11.983 "result": true 
00:26:11.983 } 
00:26:11.983 
00:26:11.983 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:26:11.983 10:39:06 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 
00:26:11.983 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 
00:26:11.983 10:39:06 nvmf_identify_passthru --
common/autotest_common.sh@10 -- # set +x 00:26:11.983 INFO: Setting log level to 40 00:26:11.983 INFO: Setting log level to 40 00:26:11.983 INFO: Setting log level to 40 00:26:11.983 [2024-07-15 10:39:06.396348] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:11.983 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.983 10:39:06 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:11.983 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:11.983 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:11.983 10:39:06 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:26:11.983 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.983 10:39:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:15.300 Nvme0n1 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.300 10:39:09 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.300 10:39:09 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.300 10:39:09 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:15.300 [2024-07-15 10:39:09.294046] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.300 10:39:09 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:15.300 [ 00:26:15.300 { 00:26:15.300 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:15.300 "subtype": "Discovery", 00:26:15.300 "listen_addresses": [], 00:26:15.300 "allow_any_host": true, 00:26:15.300 "hosts": [] 00:26:15.300 }, 00:26:15.300 { 00:26:15.300 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:15.300 "subtype": "NVMe", 00:26:15.300 "listen_addresses": [ 00:26:15.300 { 00:26:15.300 "trtype": "TCP", 00:26:15.300 "adrfam": "IPv4", 00:26:15.300 "traddr": "10.0.0.2", 00:26:15.300 "trsvcid": "4420" 00:26:15.300 } 00:26:15.300 ], 00:26:15.300 "allow_any_host": true, 00:26:15.300 "hosts": [], 00:26:15.300 "serial_number": 
"SPDK00000000000001", 00:26:15.300 "model_number": "SPDK bdev Controller", 00:26:15.300 "max_namespaces": 1, 00:26:15.300 "min_cntlid": 1, 00:26:15.300 "max_cntlid": 65519, 00:26:15.300 "namespaces": [ 00:26:15.300 { 00:26:15.300 "nsid": 1, 00:26:15.300 "bdev_name": "Nvme0n1", 00:26:15.300 "name": "Nvme0n1", 00:26:15.300 "nguid": "AFB94AC5F5C24A6E871A8BD9A6A5E5EC", 00:26:15.300 "uuid": "afb94ac5-f5c2-4a6e-871a-8bd9a6a5e5ec" 00:26:15.300 } 00:26:15.300 ] 00:26:15.300 } 00:26:15.300 ] 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.300 10:39:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:15.300 10:39:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:15.300 10:39:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:15.300 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.300 10:39:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:26:15.300 10:39:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:15.300 10:39:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:15.300 10:39:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:15.300 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.300 10:39:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:26:15.300 10:39:09 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:26:15.300 10:39:09 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:26:15.300 10:39:09 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.300 10:39:09 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:15.300 10:39:09 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:15.300 10:39:09 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:15.300 10:39:09 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:26:15.300 10:39:09 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:15.300 10:39:09 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:26:15.300 10:39:09 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:15.300 10:39:09 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:15.300 rmmod nvme_tcp 00:26:15.300 rmmod nvme_fabrics 00:26:15.300 rmmod nvme_keyring 00:26:15.300 10:39:09 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:15.300 10:39:09 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:26:15.300 10:39:09 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:26:15.300 10:39:09 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2423958 ']' 00:26:15.300 10:39:09 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2423958 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2423958 ']' 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2423958 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2423958 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2423958' 00:26:15.300 killing process with pid 2423958 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2423958 00:26:15.300 10:39:09 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2423958 00:26:17.199 10:39:11 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:17.199 10:39:11 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:17.199 10:39:11 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:17.199 10:39:11 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:17.199 10:39:11 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:17.199 10:39:11 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.199 10:39:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:17.199 10:39:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.103 10:39:13 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:19.103 00:26:19.103 real 0m17.924s 00:26:19.103 user 0m26.616s 00:26:19.103 sys 0m2.295s 00:26:19.103 10:39:13 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:19.103 10:39:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:19.103 ************************************ 00:26:19.103 END TEST nvmf_identify_passthru 00:26:19.103 ************************************ 00:26:19.103 10:39:13 -- common/autotest_common.sh@1142 -- # return 0 00:26:19.103 10:39:13 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:19.103 10:39:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:19.103 10:39:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:19.103 10:39:13 -- common/autotest_common.sh@10 -- # set +x 00:26:19.103 ************************************ 00:26:19.103 START TEST nvmf_dif 00:26:19.103 ************************************ 00:26:19.103 10:39:13 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:19.103 * Looking for test storage... 
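The shutdown traced above (nvmftestfini) unloads the initiator-side kernel modules before stopping the target and handing the interfaces back; the target had in fact already exited, hence the "No such process" / "is not found" messages. A condensed sketch of those steps; the pid and interface names come from this run, and the netns removal approximates SPDK's _remove_spdk_ns helper with a plain delete:

  modprobe -v -r nvme-tcp            # also drags out nvme_fabrics / nvme_keyring, as the rmmod lines show
  modprobe -v -r nvme-fabrics
  kill -0 2423958 2>/dev/null && kill 2423958
  ip netns delete cvl_0_0_ns_spdk    # rough stand-in for the _remove_spdk_ns helper
  ip -4 addr flush cvl_0_1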
00:26:19.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:19.103 10:39:13 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:19.103 10:39:13 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.103 10:39:13 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.103 10:39:13 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.103 10:39:13 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.103 10:39:13 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.103 10:39:13 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.103 10:39:13 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:26:19.103 10:39:13 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:19.103 10:39:13 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:26:19.103 10:39:13 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:19.103 10:39:13 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:19.103 10:39:13 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:26:19.103 10:39:13 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.103 10:39:13 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:19.103 10:39:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:19.103 10:39:13 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:26:19.103 10:39:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:21.006 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:21.006 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:21.006 10:39:15 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
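The dif test is now repeating the interface bring-up that nvmf_tcp_init performed earlier: one E810 port is moved into a private network namespace to act as the target at 10.0.0.2, while its sibling stays in the default namespace as the 10.0.0.1 initiator, so NVMe/TCP traffic genuinely leaves the host stack. The plumbing, condensed from the commands traced just below (all interface names and addresses as in this run):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port leaves the default namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # sanity check across the namespace boundary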
00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:21.007 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:21.007 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:21.007 10:39:15 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:21.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:21.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:26:21.007 00:26:21.007 --- 10.0.0.2 ping statistics --- 00:26:21.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.007 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:21.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:21.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:26:21.007 00:26:21.007 --- 10.0.0.1 ping statistics --- 00:26:21.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.007 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:21.007 10:39:15 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:22.385 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:22.385 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:22.385 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:22.385 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:22.385 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:22.385 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:22.385 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:22.385 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:22.385 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:22.385 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:22.385 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:22.385 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:22.385 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:22.385 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:22.385 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:22.385 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:22.385 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:22.385 10:39:16 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:22.385 10:39:16 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:22.385 10:39:16 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:22.385 10:39:16 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:22.385 10:39:16 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:22.385 10:39:16 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:22.385 10:39:16 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:22.385 10:39:16 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:26:22.385 10:39:16 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:22.385 10:39:16 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:22.385 10:39:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:22.385 10:39:16 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2427608 00:26:22.385 10:39:16 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:22.385 10:39:16 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2427608 00:26:22.385 10:39:16 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2427608 ']' 00:26:22.385 10:39:16 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.385 10:39:16 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:22.385 10:39:16 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.385 10:39:16 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:22.385 10:39:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:22.385 [2024-07-15 10:39:16.871752] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:22.385 [2024-07-15 10:39:16.871839] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.385 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.385 [2024-07-15 10:39:16.933840] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.644 [2024-07-15 10:39:17.039870] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.644 [2024-07-15 10:39:17.039931] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:22.644 [2024-07-15 10:39:17.039944] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.644 [2024-07-15 10:39:17.039955] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.644 [2024-07-15 10:39:17.039964] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
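Everything nvmf_tcp_init did above collapses to a short sequence of ip/iptables commands, condensed here from the trace: one E810 port (cvl_0_0) becomes the target at 10.0.0.2 inside namespace cvl_0_0_ns_spdk, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched inside that namespace.

# Condensed from the commands traced above (addr flushes omitted).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open NVMe/TCP port
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF                # as launched above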
00:26:22.644 [2024-07-15 10:39:17.040004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.644 10:39:17 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:22.644 10:39:17 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:26:22.644 10:39:17 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:22.644 10:39:17 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:22.644 10:39:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:22.644 10:39:17 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.644 10:39:17 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:22.644 10:39:17 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:22.644 10:39:17 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.644 10:39:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:22.644 [2024-07-15 10:39:17.174868] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.644 10:39:17 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.644 10:39:17 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:22.644 10:39:17 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:22.644 10:39:17 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:22.644 10:39:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:22.644 ************************************ 00:26:22.644 START TEST fio_dif_1_default 00:26:22.644 ************************************ 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:22.644 bdev_null0 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:22.644 [2024-07-15 10:39:17.231188] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.644 { 00:26:22.644 "params": { 00:26:22.644 "name": "Nvme$subsystem", 00:26:22.644 "trtype": "$TEST_TRANSPORT", 00:26:22.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.644 "adrfam": "ipv4", 00:26:22.644 "trsvcid": "$NVMF_PORT", 00:26:22.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.644 "hdgst": ${hdgst:-false}, 00:26:22.644 "ddgst": ${ddgst:-false} 00:26:22.644 }, 00:26:22.644 "method": "bdev_nvme_attach_controller" 00:26:22.644 } 00:26:22.644 EOF 00:26:22.644 )") 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:22.644 "params": { 00:26:22.644 "name": "Nvme0", 00:26:22.644 "trtype": "tcp", 00:26:22.644 "traddr": "10.0.0.2", 00:26:22.644 "adrfam": "ipv4", 00:26:22.644 "trsvcid": "4420", 00:26:22.644 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:22.644 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:22.644 "hdgst": false, 00:26:22.644 "ddgst": false 00:26:22.644 }, 00:26:22.644 "method": "bdev_nvme_attach_controller" 00:26:22.644 }' 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:22.644 10:39:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.903 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:22.903 fio-3.35 00:26:22.903 Starting 1 thread 00:26:22.903 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.099 00:26:35.099 filename0: (groupid=0, jobs=1): err= 0: pid=2427835: Mon Jul 15 10:39:28 2024 00:26:35.099 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10001msec) 00:26:35.099 slat (nsec): min=5238, max=72830, avg=10478.69, stdev=5148.19 00:26:35.099 clat (usec): min=683, max=46308, avg=21020.10, stdev=20206.92 00:26:35.099 lat (usec): min=691, max=46324, avg=21030.58, stdev=20207.91 00:26:35.099 clat percentiles (usec): 00:26:35.099 | 1.00th=[ 709], 5.00th=[ 725], 10.00th=[ 734], 20.00th=[ 766], 00:26:35.099 | 30.00th=[ 791], 40.00th=[ 816], 50.00th=[41157], 60.00th=[41157], 00:26:35.099 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:35.099 | 99.00th=[41157], 99.50th=[41157], 99.90th=[46400], 99.95th=[46400], 00:26:35.099 | 99.99th=[46400] 00:26:35.099 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=761.26, stdev=20.18, samples=19 00:26:35.099 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:26:35.099 
lat (usec) : 750=16.68%, 1000=33.21% 00:26:35.099 lat (msec) : 50=50.11% 00:26:35.099 cpu : usr=89.60%, sys=10.08%, ctx=36, majf=0, minf=232 00:26:35.099 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:35.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.099 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.099 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:35.099 00:26:35.099 Run status group 0 (all jobs): 00:26:35.099 READ: bw=760KiB/s (778kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10001-10001msec 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.099 00:26:35.099 real 0m11.162s 00:26:35.099 user 0m10.154s 00:26:35.099 sys 0m1.276s 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:35.099 ************************************ 00:26:35.099 END TEST fio_dif_1_default 00:26:35.099 ************************************ 00:26:35.099 10:39:28 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:35.099 10:39:28 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:35.099 10:39:28 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:35.099 10:39:28 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:35.099 10:39:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:35.099 ************************************ 00:26:35.099 START TEST fio_dif_1_multi_subsystems 00:26:35.099 ************************************ 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
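Each create_subsystem call, traced once above for the default test and about to be repeated below for subsystems 0 and 1, is four RPCs (rpc_cmd is the autotest wrapper around scripts/rpc.py, talking to the target's /var/tmp/spdk.sock): create a 64 MiB null bdev with 512-byte blocks plus 16 bytes of metadata and the requested DIF type, wrap it in a subsystem, attach the namespace, and add the TCP listener.

# The four RPCs behind create_subsystem N (N=0 shown, --dif-type 1
# for these first tests), exactly as they appear in the trace.
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420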
00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:35.099 bdev_null0 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:35.099 [2024-07-15 10:39:28.447686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:35.099 bdev_null1 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:35.099 10:39:28 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:35.099 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:35.100 { 00:26:35.100 "params": { 00:26:35.100 "name": "Nvme$subsystem", 00:26:35.100 "trtype": "$TEST_TRANSPORT", 00:26:35.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.100 "adrfam": "ipv4", 00:26:35.100 "trsvcid": "$NVMF_PORT", 00:26:35.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.100 "hdgst": ${hdgst:-false}, 00:26:35.100 "ddgst": ${ddgst:-false} 00:26:35.100 }, 00:26:35.100 "method": "bdev_nvme_attach_controller" 00:26:35.100 } 00:26:35.100 EOF 00:26:35.100 )") 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:35.100 { 00:26:35.100 "params": { 00:26:35.100 "name": "Nvme$subsystem", 00:26:35.100 "trtype": "$TEST_TRANSPORT", 00:26:35.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.100 "adrfam": "ipv4", 00:26:35.100 "trsvcid": "$NVMF_PORT", 00:26:35.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.100 "hdgst": ${hdgst:-false}, 00:26:35.100 "ddgst": ${ddgst:-false} 00:26:35.100 }, 00:26:35.100 "method": "bdev_nvme_attach_controller" 00:26:35.100 } 00:26:35.100 EOF 00:26:35.100 )") 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
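Note how the fio invocation being assembled here avoids temporary files: the bdev JSON from gen_nvmf_target_json and the job file from gen_fio_conf reach fio as the /dev/fd/62 and /dev/fd/61 descriptors seen in the trace, and the SPDK bdev engine is injected via LD_PRELOAD. A sketch of the same call, assuming bash process substitution is what produces those descriptors (fd numbers vary per shell):

# Sketch; <(...) is what yields the /dev/fd/NN paths shown in the trace.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0 1) <(gen_fio_conf)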
00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:35.100 "params": { 00:26:35.100 "name": "Nvme0", 00:26:35.100 "trtype": "tcp", 00:26:35.100 "traddr": "10.0.0.2", 00:26:35.100 "adrfam": "ipv4", 00:26:35.100 "trsvcid": "4420", 00:26:35.100 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:35.100 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:35.100 "hdgst": false, 00:26:35.100 "ddgst": false 00:26:35.100 }, 00:26:35.100 "method": "bdev_nvme_attach_controller" 00:26:35.100 },{ 00:26:35.100 "params": { 00:26:35.100 "name": "Nvme1", 00:26:35.100 "trtype": "tcp", 00:26:35.100 "traddr": "10.0.0.2", 00:26:35.100 "adrfam": "ipv4", 00:26:35.100 "trsvcid": "4420", 00:26:35.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:35.100 "hdgst": false, 00:26:35.100 "ddgst": false 00:26:35.100 }, 00:26:35.100 "method": "bdev_nvme_attach_controller" 00:26:35.100 }' 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:35.100 10:39:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:35.100 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:35.100 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:35.100 fio-3.35 00:26:35.100 Starting 2 threads 00:26:35.100 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.083 00:26:45.083 filename0: (groupid=0, jobs=1): err= 0: pid=2429234: Mon Jul 15 10:39:39 2024 00:26:45.083 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10039msec) 00:26:45.083 slat (nsec): min=7003, max=47534, avg=9226.34, stdev=3445.02 00:26:45.083 clat (usec): min=40855, max=42978, avg=41796.17, stdev=398.79 00:26:45.083 lat (usec): min=40873, max=42991, avg=41805.40, stdev=398.88 00:26:45.083 clat percentiles (usec): 00:26:45.083 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:26:45.083 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:26:45.083 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:45.083 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:26:45.083 | 99.99th=[42730] 
00:26:45.083 bw ( KiB/s): min= 352, max= 384, per=33.48%, avg=382.40, stdev= 7.16, samples=20 00:26:45.083 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20 00:26:45.083 lat (msec) : 50=100.00% 00:26:45.083 cpu : usr=94.21%, sys=5.47%, ctx=21, majf=0, minf=134 00:26:45.083 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.083 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.083 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:45.083 filename1: (groupid=0, jobs=1): err= 0: pid=2429235: Mon Jul 15 10:39:39 2024 00:26:45.083 read: IOPS=190, BW=762KiB/s (780kB/s)(7616KiB/10001msec) 00:26:45.083 slat (nsec): min=7004, max=62141, avg=8678.55, stdev=2924.95 00:26:45.083 clat (usec): min=705, max=42009, avg=20981.60, stdev=20177.01 00:26:45.083 lat (usec): min=713, max=42062, avg=20990.28, stdev=20176.89 00:26:45.083 clat percentiles (usec): 00:26:45.083 | 1.00th=[ 758], 5.00th=[ 783], 10.00th=[ 783], 20.00th=[ 799], 00:26:45.083 | 30.00th=[ 807], 40.00th=[ 816], 50.00th=[ 1336], 60.00th=[41157], 00:26:45.083 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:45.083 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:26:45.083 | 99.99th=[42206] 00:26:45.083 bw ( KiB/s): min= 704, max= 768, per=66.69%, avg=761.26, stdev=20.18, samples=19 00:26:45.083 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:26:45.083 lat (usec) : 750=0.79%, 1000=48.63% 00:26:45.083 lat (msec) : 2=0.58%, 50=50.00% 00:26:45.083 cpu : usr=94.04%, sys=5.64%, ctx=14, majf=0, minf=158 00:26:45.083 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.083 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.083 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:45.083 00:26:45.083 Run status group 0 (all jobs): 00:26:45.083 READ: bw=1141KiB/s (1169kB/s), 383KiB/s-762KiB/s (392kB/s-780kB/s), io=11.2MiB (11.7MB), run=10001-10039msec 00:26:45.083 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:45.083 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:45.083 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:45.083 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:45.083 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:45.083 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:45.083 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.083 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.083 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.083 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:45.343 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:45.343 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.343 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.343 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:45.343 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:45.343 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:45.343 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:45.343 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.343 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.343 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.343 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:45.343 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.343 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.343 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.343 00:26:45.343 real 0m11.340s 00:26:45.343 user 0m20.135s 00:26:45.343 sys 0m1.391s 00:26:45.343 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:45.343 10:39:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.343 ************************************ 00:26:45.343 END TEST fio_dif_1_multi_subsystems 00:26:45.343 ************************************ 00:26:45.343 10:39:39 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:45.343 10:39:39 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:45.343 10:39:39 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:45.343 10:39:39 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:45.343 10:39:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:45.343 ************************************ 00:26:45.343 START TEST fio_dif_rand_params 00:26:45.343 ************************************ 00:26:45.343 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:45.343 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:45.343 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:45.343 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:45.343 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:45.344 bdev_null0 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:45.344 [2024-07-15 10:39:39.840396] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.344 { 00:26:45.344 "params": { 00:26:45.344 "name": "Nvme$subsystem", 00:26:45.344 "trtype": "$TEST_TRANSPORT", 00:26:45.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.344 "adrfam": "ipv4", 00:26:45.344 "trsvcid": "$NVMF_PORT", 00:26:45.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.344 "hdgst": ${hdgst:-false}, 00:26:45.344 "ddgst": ${ddgst:-false} 00:26:45.344 }, 00:26:45.344 "method": "bdev_nvme_attach_controller" 00:26:45.344 } 00:26:45.344 EOF 00:26:45.344 )") 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:45.344 10:39:39 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:45.344 "params": { 00:26:45.344 "name": "Nvme0", 00:26:45.344 "trtype": "tcp", 00:26:45.344 "traddr": "10.0.0.2", 00:26:45.344 "adrfam": "ipv4", 00:26:45.344 "trsvcid": "4420", 00:26:45.344 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:45.344 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:45.344 "hdgst": false, 00:26:45.344 "ddgst": false 00:26:45.344 }, 00:26:45.344 "method": "bdev_nvme_attach_controller" 00:26:45.344 }' 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:45.344 10:39:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:45.602 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:45.602 ... 
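For the run below, fio echoes back rw=randread, bs=128KiB, ioengine=spdk_bdev, iodepth=3 across three threads for roughly 5 s, matching the NULL_DIF=3 / bs=128k / numjobs=3 / iodepth=3 / runtime=5 parameters set at the top of this test. Reconstructed as a plain job file it would look roughly as follows; only the option values come from the log, while the file name, section layout, and the thread/time_based lines are assumptions:

# Rough reconstruction, not the literal gen_fio_conf output; the bdev
# name Nvme0n1 assumes SPDK's usual controller+namespace naming, and
# thread=1 assumes the spdk_bdev engine's threading requirement.
cat > dif_randread.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1
[filename0]
filename=Nvme0n1
EOF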
00:26:45.602 fio-3.35 00:26:45.602 Starting 3 threads 00:26:45.602 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.163 00:26:52.163 filename0: (groupid=0, jobs=1): err= 0: pid=2430636: Mon Jul 15 10:39:45 2024 00:26:52.163 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(127MiB/5036msec) 00:26:52.163 slat (nsec): min=4984, max=76474, avg=15302.57, stdev=6243.68 00:26:52.163 clat (usec): min=5385, max=90570, avg=14818.61, stdev=12712.15 00:26:52.163 lat (usec): min=5396, max=90583, avg=14833.92, stdev=12712.34 00:26:52.163 clat percentiles (usec): 00:26:52.163 | 1.00th=[ 5669], 5.00th=[ 6259], 10.00th=[ 7177], 20.00th=[ 8455], 00:26:52.163 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[11076], 60.00th=[11994], 00:26:52.163 | 70.00th=[12911], 80.00th=[14091], 90.00th=[46924], 95.00th=[51119], 00:26:52.163 | 99.00th=[53740], 99.50th=[54264], 99.90th=[55837], 99.95th=[90702], 00:26:52.163 | 99.99th=[90702] 00:26:52.163 bw ( KiB/s): min=18432, max=32256, per=34.36%, avg=25984.00, stdev=4538.34, samples=10 00:26:52.163 iops : min= 144, max= 252, avg=203.00, stdev=35.46, samples=10 00:26:52.163 lat (msec) : 10=42.14%, 20=47.35%, 50=3.73%, 100=6.78% 00:26:52.163 cpu : usr=92.79%, sys=6.22%, ctx=124, majf=0, minf=163 00:26:52.163 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.163 issued rwts: total=1018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.163 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:52.163 filename0: (groupid=0, jobs=1): err= 0: pid=2430637: Mon Jul 15 10:39:45 2024 00:26:52.163 read: IOPS=203, BW=25.4MiB/s (26.7MB/s)(127MiB/5006msec) 00:26:52.163 slat (nsec): min=7282, max=45365, avg=14948.27, stdev=5460.87 00:26:52.163 clat (usec): min=4772, max=93697, avg=14715.13, stdev=13101.71 00:26:52.163 lat (usec): min=4785, max=93710, avg=14730.07, stdev=13101.87 00:26:52.163 clat percentiles (usec): 00:26:52.163 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6194], 20.00th=[ 8029], 00:26:52.163 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10814], 60.00th=[11994], 00:26:52.163 | 70.00th=[12911], 80.00th=[14222], 90.00th=[46400], 95.00th=[51119], 00:26:52.163 | 99.00th=[54789], 99.50th=[55313], 99.90th=[86508], 99.95th=[93848], 00:26:52.163 | 99.99th=[93848] 00:26:52.163 bw ( KiB/s): min=19200, max=35584, per=34.40%, avg=26014.20, stdev=4442.76, samples=10 00:26:52.163 iops : min= 150, max= 278, avg=203.20, stdev=34.73, samples=10 00:26:52.163 lat (msec) : 10=42.98%, 20=46.61%, 50=3.63%, 100=6.77% 00:26:52.163 cpu : usr=93.19%, sys=6.37%, ctx=9, majf=0, minf=103 00:26:52.163 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.163 issued rwts: total=1019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.163 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:52.163 filename0: (groupid=0, jobs=1): err= 0: pid=2430638: Mon Jul 15 10:39:45 2024 00:26:52.163 read: IOPS=187, BW=23.4MiB/s (24.6MB/s)(117MiB/5004msec) 00:26:52.163 slat (nsec): min=4917, max=48437, avg=17383.95, stdev=5917.67 00:26:52.163 clat (usec): min=5112, max=90616, avg=15978.26, stdev=14488.49 00:26:52.163 lat (usec): min=5124, max=90631, avg=15995.64, stdev=14488.64 00:26:52.163 clat percentiles (usec): 
00:26:52.163 | 1.00th=[ 5669], 5.00th=[ 6063], 10.00th=[ 6652], 20.00th=[ 8455], 00:26:52.163 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[11207], 60.00th=[12125], 00:26:52.163 | 70.00th=[13304], 80.00th=[15139], 90.00th=[49021], 95.00th=[52167], 00:26:52.163 | 99.00th=[55837], 99.50th=[89654], 99.90th=[90702], 99.95th=[90702], 00:26:52.163 | 99.99th=[90702] 00:26:52.163 bw ( KiB/s): min=18432, max=32256, per=31.65%, avg=23936.00, stdev=4351.58, samples=10 00:26:52.163 iops : min= 144, max= 252, avg=187.00, stdev=34.00, samples=10 00:26:52.163 lat (msec) : 10=40.51%, 20=46.91%, 50=4.48%, 100=8.10% 00:26:52.163 cpu : usr=94.18%, sys=5.34%, ctx=13, majf=0, minf=92 00:26:52.163 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.163 issued rwts: total=938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.163 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:52.163 00:26:52.163 Run status group 0 (all jobs): 00:26:52.163 READ: bw=73.8MiB/s (77.4MB/s), 23.4MiB/s-25.4MiB/s (24.6MB/s-26.7MB/s), io=372MiB (390MB), run=5004-5036msec 00:26:52.163 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:52.163 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:52.163 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:52.163 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:52.163 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:52.163 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:52.163 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.163 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.163 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.163 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:52.163 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
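The suite then switches to NULL_DIF=2 with 4k blocks, 8 jobs, queue depth 16 and files=2, so three subsystems get built; the setup traced below is the same four-RPC sequence as before, once per index, now with --dif-type 2 (in NVMe protection-information terms, type 2 still carries the 16-byte PI but, unlike type 1, does not tie the reference tag to the lower LBA bits, while type 3, used in the previous run, leaves the reference tag unchecked). Collapsed into a loop:

# Same RPCs as in the earlier tests, one pass per subsystem;
# only --dif-type differs.
for N in 0 1 2; do
    rpc_cmd bdev_null_create bdev_null$N 64 512 --md-size 16 --dif-type 2
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$N \
        --serial-number 53313233-$N --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$N bdev_null$N
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$N \
        -t tcp -a 10.0.0.2 -s 4420
done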
00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.164 bdev_null0 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.164 [2024-07-15 10:39:45.996237] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:52.164 10:39:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.164 bdev_null1 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.164 bdev_null2 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.164 { 00:26:52.164 "params": { 00:26:52.164 "name": "Nvme$subsystem", 00:26:52.164 "trtype": "$TEST_TRANSPORT", 00:26:52.164 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.164 "adrfam": "ipv4", 00:26:52.164 "trsvcid": "$NVMF_PORT", 00:26:52.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.164 "hdgst": ${hdgst:-false}, 00:26:52.164 "ddgst": ${ddgst:-false} 00:26:52.164 }, 00:26:52.164 "method": "bdev_nvme_attach_controller" 00:26:52.164 } 00:26:52.164 EOF 00:26:52.164 )") 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.164 { 00:26:52.164 "params": { 00:26:52.164 "name": "Nvme$subsystem", 00:26:52.164 "trtype": "$TEST_TRANSPORT", 00:26:52.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.164 "adrfam": "ipv4", 00:26:52.164 "trsvcid": "$NVMF_PORT", 00:26:52.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.164 "hdgst": ${hdgst:-false}, 00:26:52.164 "ddgst": ${ddgst:-false} 00:26:52.164 }, 00:26:52.164 "method": "bdev_nvme_attach_controller" 00:26:52.164 } 00:26:52.164 EOF 00:26:52.164 )") 00:26:52.164 10:39:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.165 { 00:26:52.165 "params": { 00:26:52.165 "name": "Nvme$subsystem", 00:26:52.165 "trtype": "$TEST_TRANSPORT", 00:26:52.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.165 "adrfam": "ipv4", 00:26:52.165 "trsvcid": "$NVMF_PORT", 00:26:52.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.165 "hdgst": ${hdgst:-false}, 00:26:52.165 "ddgst": ${ddgst:-false} 00:26:52.165 }, 00:26:52.165 "method": "bdev_nvme_attach_controller" 00:26:52.165 } 00:26:52.165 EOF 00:26:52.165 )") 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:52.165 "params": { 00:26:52.165 "name": "Nvme0", 00:26:52.165 "trtype": "tcp", 00:26:52.165 "traddr": "10.0.0.2", 00:26:52.165 "adrfam": "ipv4", 00:26:52.165 "trsvcid": "4420", 00:26:52.165 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:52.165 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:52.165 "hdgst": false, 00:26:52.165 "ddgst": false 00:26:52.165 }, 00:26:52.165 "method": "bdev_nvme_attach_controller" 00:26:52.165 },{ 00:26:52.165 "params": { 00:26:52.165 "name": "Nvme1", 00:26:52.165 "trtype": "tcp", 00:26:52.165 "traddr": "10.0.0.2", 00:26:52.165 "adrfam": "ipv4", 00:26:52.165 "trsvcid": "4420", 00:26:52.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:52.165 "hdgst": false, 00:26:52.165 "ddgst": false 00:26:52.165 }, 00:26:52.165 "method": "bdev_nvme_attach_controller" 00:26:52.165 },{ 00:26:52.165 "params": { 00:26:52.165 "name": "Nvme2", 00:26:52.165 "trtype": "tcp", 00:26:52.165 "traddr": "10.0.0.2", 00:26:52.165 "adrfam": "ipv4", 00:26:52.165 "trsvcid": "4420", 00:26:52.165 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:52.165 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:52.165 "hdgst": false, 00:26:52.165 "ddgst": false 00:26:52.165 }, 00:26:52.165 "method": "bdev_nvme_attach_controller" 00:26:52.165 }' 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:52.165 10:39:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:52.165 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:52.165 ... 00:26:52.165 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:52.165 ... 00:26:52.165 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:52.165 ... 00:26:52.165 fio-3.35 00:26:52.165 Starting 24 threads 00:26:52.165 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.358 00:27:04.358 filename0: (groupid=0, jobs=1): err= 0: pid=2431500: Mon Jul 15 10:39:57 2024 00:27:04.358 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.8MiB/10014msec) 00:27:04.358 slat (usec): min=10, max=107, avg=42.62, stdev=17.83 00:27:04.358 clat (usec): min=25253, max=47956, avg=33037.43, stdev=1191.44 00:27:04.358 lat (usec): min=25297, max=47986, avg=33080.04, stdev=1189.05 00:27:04.358 clat percentiles (usec): 00:27:04.358 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:27:04.358 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:27:04.358 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:27:04.358 | 99.00th=[36439], 99.50th=[36439], 99.90th=[47973], 99.95th=[47973], 00:27:04.358 | 99.99th=[47973] 00:27:04.358 bw ( KiB/s): min= 1664, max= 2043, per=4.15%, avg=1913.35, stdev=64.80, samples=20 00:27:04.358 iops : min= 416, max= 510, avg=478.30, stdev=16.12, samples=20 00:27:04.358 lat (msec) : 50=100.00% 00:27:04.358 cpu : usr=97.17%, sys=1.89%, ctx=246, majf=0, minf=57 00:27:04.358 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:04.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.358 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.358 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.358 filename0: (groupid=0, jobs=1): err= 0: pid=2431501: Mon Jul 15 10:39:57 2024 00:27:04.358 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10015msec) 00:27:04.358 slat (usec): min=8, max=126, avg=35.95, stdev=21.59 00:27:04.358 clat (usec): min=10247, max=37299, avg=32852.95, stdev=1940.27 00:27:04.358 lat (usec): min=10263, max=37344, avg=32888.90, stdev=1937.30 00:27:04.358 clat percentiles (usec): 00:27:04.358 | 1.00th=[29230], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:27:04.358 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:27:04.358 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:27:04.358 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:27:04.358 | 99.99th=[37487] 00:27:04.358 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1926.20, stdev=50.47, samples=20 00:27:04.358 iops : min= 448, max= 512, avg=481.55, stdev=12.62, samples=20 00:27:04.358 lat (msec) : 20=0.66%, 50=99.34% 00:27:04.358 cpu : usr=97.84%, sys=1.75%, 
ctx=35, majf=0, minf=44 00:27:04.358 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:04.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.358 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.358 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.358 filename0: (groupid=0, jobs=1): err= 0: pid=2431502: Mon Jul 15 10:39:57 2024 00:27:04.358 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10012msec) 00:27:04.358 slat (usec): min=14, max=134, avg=46.91, stdev=17.13 00:27:04.358 clat (usec): min=25196, max=48005, avg=32933.05, stdev=1224.00 00:27:04.358 lat (usec): min=25234, max=48036, avg=32979.95, stdev=1223.96 00:27:04.358 clat percentiles (usec): 00:27:04.358 | 1.00th=[32113], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:27:04.358 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:27:04.358 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:27:04.358 | 99.00th=[36439], 99.50th=[36963], 99.90th=[47973], 99.95th=[47973], 00:27:04.358 | 99.99th=[47973] 00:27:04.358 bw ( KiB/s): min= 1664, max= 2043, per=4.15%, avg=1913.35, stdev=64.80, samples=20 00:27:04.358 iops : min= 416, max= 510, avg=478.30, stdev=16.12, samples=20 00:27:04.358 lat (msec) : 50=100.00% 00:27:04.358 cpu : usr=91.20%, sys=4.67%, ctx=402, majf=0, minf=56 00:27:04.358 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:04.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.358 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.358 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.358 filename0: (groupid=0, jobs=1): err= 0: pid=2431503: Mon Jul 15 10:39:57 2024 00:27:04.358 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.8MiB/10015msec) 00:27:04.358 slat (nsec): min=8184, max=70388, avg=31075.45, stdev=10575.30 00:27:04.358 clat (usec): min=25021, max=55987, avg=33103.90, stdev=1547.20 00:27:04.358 lat (usec): min=25044, max=56012, avg=33134.97, stdev=1547.74 00:27:04.358 clat percentiles (usec): 00:27:04.358 | 1.00th=[32375], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:27:04.358 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:27:04.358 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:27:04.358 | 99.00th=[36439], 99.50th=[36963], 99.90th=[55837], 99.95th=[55837], 00:27:04.358 | 99.99th=[55837] 00:27:04.358 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1913.40, stdev=65.32, samples=20 00:27:04.358 iops : min= 416, max= 512, avg=478.35, stdev=16.33, samples=20 00:27:04.359 lat (msec) : 50=99.67%, 100=0.33% 00:27:04.359 cpu : usr=97.95%, sys=1.67%, ctx=18, majf=0, minf=50 00:27:04.359 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:04.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.359 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.359 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.359 filename0: (groupid=0, jobs=1): err= 0: pid=2431504: Mon Jul 15 10:39:57 2024 00:27:04.359 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10008msec) 00:27:04.359 slat (usec): 
min=8, max=128, avg=28.94, stdev=18.88 00:27:04.359 clat (usec): min=12825, max=43235, avg=32951.45, stdev=1746.71 00:27:04.359 lat (usec): min=12839, max=43268, avg=32980.39, stdev=1746.38 00:27:04.359 clat percentiles (usec): 00:27:04.359 | 1.00th=[23987], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:27:04.359 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:04.359 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:27:04.359 | 99.00th=[35914], 99.50th=[36439], 99.90th=[43254], 99.95th=[43254], 00:27:04.359 | 99.99th=[43254] 00:27:04.359 bw ( KiB/s): min= 1808, max= 2032, per=4.18%, avg=1925.47, stdev=42.98, samples=19 00:27:04.359 iops : min= 452, max= 508, avg=481.37, stdev=10.75, samples=19 00:27:04.359 lat (msec) : 20=0.48%, 50=99.52% 00:27:04.359 cpu : usr=98.11%, sys=1.47%, ctx=23, majf=0, minf=42 00:27:04.359 IO depths : 1=0.1%, 2=6.0%, 4=24.1%, 8=57.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:27:04.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.359 complete : 0=0.0%, 4=94.2%, 8=0.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.359 issued rwts: total=4829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.359 filename0: (groupid=0, jobs=1): err= 0: pid=2431505: Mon Jul 15 10:39:57 2024 00:27:04.359 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10007msec) 00:27:04.359 slat (usec): min=9, max=136, avg=52.59, stdev=21.90 00:27:04.359 clat (usec): min=14741, max=56307, avg=32837.31, stdev=1946.76 00:27:04.359 lat (usec): min=14764, max=56344, avg=32889.91, stdev=1947.34 00:27:04.359 clat percentiles (usec): 00:27:04.359 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:27:04.359 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:27:04.359 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34866], 00:27:04.359 | 99.00th=[36439], 99.50th=[39584], 99.90th=[56361], 99.95th=[56361], 00:27:04.359 | 99.99th=[56361] 00:27:04.359 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1913.26, stdev=67.11, samples=19 00:27:04.359 iops : min= 416, max= 512, avg=478.32, stdev=16.78, samples=19 00:27:04.359 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:27:04.359 cpu : usr=98.11%, sys=1.45%, ctx=13, majf=0, minf=59 00:27:04.359 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:27:04.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.359 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.359 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.359 filename0: (groupid=0, jobs=1): err= 0: pid=2431506: Mon Jul 15 10:39:57 2024 00:27:04.359 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10003msec) 00:27:04.359 slat (nsec): min=12678, max=79510, avg=41018.24, stdev=11087.31 00:27:04.359 clat (usec): min=14725, max=59414, avg=32972.83, stdev=2015.82 00:27:04.359 lat (usec): min=14738, max=59448, avg=33013.84, stdev=2015.01 00:27:04.359 clat percentiles (usec): 00:27:04.359 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:27:04.359 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:27:04.359 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:27:04.359 | 99.00th=[36439], 99.50th=[36439], 99.90th=[59507], 99.95th=[59507], 00:27:04.359 | 99.99th=[59507] 
00:27:04.359 bw ( KiB/s): min= 1667, max= 2048, per=4.15%, avg=1913.21, stdev=78.99, samples=19 00:27:04.359 iops : min= 416, max= 512, avg=478.26, stdev=19.88, samples=19 00:27:04.359 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:27:04.359 cpu : usr=97.99%, sys=1.63%, ctx=14, majf=0, minf=56 00:27:04.359 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:04.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.359 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.359 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.359 filename0: (groupid=0, jobs=1): err= 0: pid=2431507: Mon Jul 15 10:39:57 2024 00:27:04.359 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10009msec) 00:27:04.359 slat (usec): min=9, max=117, avg=49.80, stdev=17.42 00:27:04.359 clat (usec): min=14814, max=65529, avg=32915.45, stdev=2159.68 00:27:04.359 lat (usec): min=14869, max=65567, avg=32965.25, stdev=2159.73 00:27:04.359 clat percentiles (usec): 00:27:04.359 | 1.00th=[30278], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:27:04.359 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:27:04.359 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:27:04.359 | 99.00th=[36963], 99.50th=[41681], 99.90th=[58983], 99.95th=[58983], 00:27:04.359 | 99.99th=[65274] 00:27:04.359 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1913.60, stdev=76.19, samples=20 00:27:04.359 iops : min= 416, max= 512, avg=478.40, stdev=19.05, samples=20 00:27:04.359 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:27:04.359 cpu : usr=97.12%, sys=2.00%, ctx=175, majf=0, minf=45 00:27:04.359 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:27:04.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.359 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.359 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.359 filename1: (groupid=0, jobs=1): err= 0: pid=2431508: Mon Jul 15 10:39:57 2024 00:27:04.359 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.8MiB/10015msec) 00:27:04.359 slat (nsec): min=8369, max=78495, avg=32679.43, stdev=9957.60 00:27:04.359 clat (usec): min=13094, max=80577, avg=33090.03, stdev=2254.78 00:27:04.359 lat (usec): min=13111, max=80596, avg=33122.71, stdev=2254.62 00:27:04.359 clat percentiles (usec): 00:27:04.359 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:27:04.359 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:27:04.359 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:27:04.359 | 99.00th=[36439], 99.50th=[53216], 99.90th=[56886], 99.95th=[56886], 00:27:04.359 | 99.99th=[80217] 00:27:04.359 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1913.40, stdev=65.32, samples=20 00:27:04.359 iops : min= 416, max= 512, avg=478.35, stdev=16.33, samples=20 00:27:04.359 lat (msec) : 20=0.25%, 50=99.21%, 100=0.54% 00:27:04.359 cpu : usr=94.65%, sys=3.06%, ctx=218, majf=0, minf=59 00:27:04.359 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:27:04.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.359 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.359 issued rwts: 
total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.359 filename1: (groupid=0, jobs=1): err= 0: pid=2431509: Mon Jul 15 10:39:57 2024 00:27:04.359 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10006msec) 00:27:04.359 slat (usec): min=6, max=102, avg=44.41, stdev=13.91 00:27:04.359 clat (usec): min=14747, max=63089, avg=32952.50, stdev=2184.93 00:27:04.359 lat (usec): min=14774, max=63110, avg=32996.90, stdev=2183.58 00:27:04.359 clat percentiles (usec): 00:27:04.359 | 1.00th=[32113], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:27:04.359 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:27:04.359 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34866], 00:27:04.359 | 99.00th=[36439], 99.50th=[36439], 99.90th=[63177], 99.95th=[63177], 00:27:04.359 | 99.99th=[63177] 00:27:04.359 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1913.26, stdev=79.52, samples=19 00:27:04.359 iops : min= 416, max= 512, avg=478.32, stdev=19.88, samples=19 00:27:04.359 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:27:04.359 cpu : usr=94.63%, sys=2.98%, ctx=101, majf=0, minf=71 00:27:04.359 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:04.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.359 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.359 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.359 filename1: (groupid=0, jobs=1): err= 0: pid=2431510: Mon Jul 15 10:39:57 2024 00:27:04.359 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10006msec) 00:27:04.359 slat (usec): min=9, max=113, avg=47.05, stdev=19.18 00:27:04.359 clat (usec): min=16034, max=36978, avg=32862.82, stdev=1392.34 00:27:04.359 lat (usec): min=16055, max=37023, avg=32909.88, stdev=1391.78 00:27:04.359 clat percentiles (usec): 00:27:04.359 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:27:04.359 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:27:04.359 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:27:04.359 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:27:04.359 | 99.99th=[36963] 00:27:04.359 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1919.79, stdev=42.68, samples=19 00:27:04.359 iops : min= 448, max= 512, avg=479.95, stdev=10.67, samples=19 00:27:04.359 lat (msec) : 20=0.33%, 50=99.67% 00:27:04.359 cpu : usr=91.02%, sys=4.95%, ctx=396, majf=0, minf=66 00:27:04.359 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:04.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.359 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.359 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.359 filename1: (groupid=0, jobs=1): err= 0: pid=2431511: Mon Jul 15 10:39:57 2024 00:27:04.359 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.8MiB/10015msec) 00:27:04.359 slat (usec): min=8, max=108, avg=37.46, stdev=14.59 00:27:04.359 clat (usec): min=24955, max=55957, avg=33056.34, stdev=1555.06 00:27:04.359 lat (usec): min=24994, max=55999, avg=33093.80, stdev=1555.13 00:27:04.359 clat percentiles (usec): 00:27:04.359 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32375], 
20.00th=[32637], 00:27:04.359 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:27:04.359 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:27:04.359 | 99.00th=[36439], 99.50th=[36963], 99.90th=[55837], 99.95th=[55837], 00:27:04.359 | 99.99th=[55837] 00:27:04.359 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1913.40, stdev=65.32, samples=20 00:27:04.359 iops : min= 416, max= 512, avg=478.35, stdev=16.33, samples=20 00:27:04.359 lat (msec) : 50=99.67%, 100=0.33% 00:27:04.359 cpu : usr=97.97%, sys=1.44%, ctx=96, majf=0, minf=45 00:27:04.360 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:04.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.360 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.360 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.360 filename1: (groupid=0, jobs=1): err= 0: pid=2431512: Mon Jul 15 10:39:57 2024 00:27:04.360 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10012msec) 00:27:04.360 slat (usec): min=10, max=136, avg=45.72, stdev=17.12 00:27:04.360 clat (usec): min=25274, max=47998, avg=32973.00, stdev=1207.55 00:27:04.360 lat (usec): min=25312, max=48025, avg=33018.72, stdev=1206.67 00:27:04.360 clat percentiles (usec): 00:27:04.360 | 1.00th=[32113], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:27:04.360 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:27:04.360 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:27:04.360 | 99.00th=[36439], 99.50th=[36963], 99.90th=[47973], 99.95th=[47973], 00:27:04.360 | 99.99th=[47973] 00:27:04.360 bw ( KiB/s): min= 1664, max= 2043, per=4.15%, avg=1913.35, stdev=64.80, samples=20 00:27:04.360 iops : min= 416, max= 510, avg=478.30, stdev=16.12, samples=20 00:27:04.360 lat (msec) : 50=100.00% 00:27:04.360 cpu : usr=97.94%, sys=1.63%, ctx=18, majf=0, minf=44 00:27:04.360 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:04.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.360 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.360 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.360 filename1: (groupid=0, jobs=1): err= 0: pid=2431513: Mon Jul 15 10:39:57 2024 00:27:04.360 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10004msec) 00:27:04.360 slat (usec): min=9, max=119, avg=49.79, stdev=19.04 00:27:04.360 clat (usec): min=14740, max=60244, avg=32875.07, stdev=2056.67 00:27:04.360 lat (usec): min=14776, max=60277, avg=32924.86, stdev=2056.51 00:27:04.360 clat percentiles (usec): 00:27:04.360 | 1.00th=[32113], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:27:04.360 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:27:04.360 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34866], 00:27:04.360 | 99.00th=[36439], 99.50th=[36439], 99.90th=[60031], 99.95th=[60031], 00:27:04.360 | 99.99th=[60031] 00:27:04.360 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1913.05, stdev=79.51, samples=19 00:27:04.360 iops : min= 416, max= 512, avg=478.26, stdev=19.88, samples=19 00:27:04.360 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:27:04.360 cpu : usr=94.13%, sys=3.13%, ctx=129, majf=0, minf=54 00:27:04.360 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:04.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.360 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.360 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.360 filename1: (groupid=0, jobs=1): err= 0: pid=2431514: Mon Jul 15 10:39:57 2024 00:27:04.360 read: IOPS=478, BW=1914KiB/s (1960kB/s)(18.7MiB/10022msec) 00:27:04.360 slat (usec): min=8, max=104, avg=43.77, stdev=24.15 00:27:04.360 clat (usec): min=21685, max=78725, avg=33034.01, stdev=2597.41 00:27:04.360 lat (usec): min=21709, max=78747, avg=33077.78, stdev=2594.61 00:27:04.360 clat percentiles (usec): 00:27:04.360 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:27:04.360 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:27:04.360 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:27:04.360 | 99.00th=[36439], 99.50th=[42730], 99.90th=[71828], 99.95th=[71828], 00:27:04.360 | 99.99th=[79168] 00:27:04.360 bw ( KiB/s): min= 1536, max= 2048, per=4.15%, avg=1910.80, stdev=94.02, samples=20 00:27:04.360 iops : min= 384, max= 512, avg=477.70, stdev=23.50, samples=20 00:27:04.360 lat (msec) : 50=99.67%, 100=0.33% 00:27:04.360 cpu : usr=98.10%, sys=1.50%, ctx=14, majf=0, minf=74 00:27:04.360 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:27:04.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.360 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.360 issued rwts: total=4796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.360 filename1: (groupid=0, jobs=1): err= 0: pid=2431515: Mon Jul 15 10:39:57 2024 00:27:04.360 read: IOPS=482, BW=1930KiB/s (1977kB/s)(18.9MiB/10012msec) 00:27:04.360 slat (usec): min=11, max=122, avg=56.48, stdev=21.25 00:27:04.360 clat (usec): min=8776, max=36963, avg=32613.86, stdev=2073.62 00:27:04.360 lat (usec): min=8788, max=37006, avg=32670.34, stdev=2076.11 00:27:04.360 clat percentiles (usec): 00:27:04.360 | 1.00th=[29492], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:27:04.360 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:27:04.360 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34866], 00:27:04.360 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:27:04.360 | 99.99th=[36963] 00:27:04.360 bw ( KiB/s): min= 1916, max= 2048, per=4.18%, avg=1926.20, stdev=28.68, samples=20 00:27:04.360 iops : min= 479, max= 512, avg=481.55, stdev= 7.17, samples=20 00:27:04.360 lat (msec) : 10=0.33%, 20=0.33%, 50=99.34% 00:27:04.360 cpu : usr=98.37%, sys=1.18%, ctx=15, majf=0, minf=69 00:27:04.360 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:04.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.360 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.360 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.360 filename2: (groupid=0, jobs=1): err= 0: pid=2431516: Mon Jul 15 10:39:57 2024 00:27:04.360 read: IOPS=481, BW=1925KiB/s (1972kB/s)(18.8MiB/10005msec) 00:27:04.360 slat (usec): min=6, max=133, avg=49.69, stdev=29.02 00:27:04.360 
clat (usec): min=12882, max=37868, avg=32817.35, stdev=1528.34 00:27:04.360 lat (usec): min=12891, max=37921, avg=32867.04, stdev=1526.19 00:27:04.360 clat percentiles (usec): 00:27:04.360 | 1.00th=[30016], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:27:04.360 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:27:04.360 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:27:04.360 | 99.00th=[35914], 99.50th=[36439], 99.90th=[37487], 99.95th=[38011], 00:27:04.360 | 99.99th=[38011] 00:27:04.360 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1919.95, stdev=42.68, samples=19 00:27:04.360 iops : min= 448, max= 512, avg=479.95, stdev=10.67, samples=19 00:27:04.360 lat (msec) : 20=0.33%, 50=99.67% 00:27:04.360 cpu : usr=97.79%, sys=1.78%, ctx=23, majf=0, minf=56 00:27:04.360 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:04.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.360 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.360 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.360 filename2: (groupid=0, jobs=1): err= 0: pid=2431517: Mon Jul 15 10:39:57 2024 00:27:04.360 read: IOPS=483, BW=1932KiB/s (1978kB/s)(18.9MiB/10004msec) 00:27:04.360 slat (usec): min=8, max=129, avg=22.47, stdev=21.73 00:27:04.360 clat (usec): min=8740, max=37036, avg=32916.27, stdev=2114.89 00:27:04.360 lat (usec): min=8755, max=37071, avg=32938.74, stdev=2114.77 00:27:04.360 clat percentiles (usec): 00:27:04.360 | 1.00th=[25822], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:27:04.360 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:04.360 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:27:04.360 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:27:04.360 | 99.99th=[36963] 00:27:04.360 bw ( KiB/s): min= 1916, max= 2048, per=4.18%, avg=1926.53, stdev=29.43, samples=19 00:27:04.360 iops : min= 479, max= 512, avg=481.63, stdev= 7.36, samples=19 00:27:04.360 lat (msec) : 10=0.33%, 20=0.33%, 50=99.34% 00:27:04.360 cpu : usr=97.94%, sys=1.66%, ctx=15, majf=0, minf=75 00:27:04.360 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:04.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.360 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.360 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.360 filename2: (groupid=0, jobs=1): err= 0: pid=2431518: Mon Jul 15 10:39:57 2024 00:27:04.360 read: IOPS=480, BW=1921KiB/s (1967kB/s)(18.8MiB/10008msec) 00:27:04.360 slat (usec): min=9, max=102, avg=42.91, stdev=12.71 00:27:04.360 clat (usec): min=14740, max=70354, avg=32941.25, stdev=2158.13 00:27:04.360 lat (usec): min=14780, max=70386, avg=32984.16, stdev=2157.70 00:27:04.360 clat percentiles (usec): 00:27:04.360 | 1.00th=[27919], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:27:04.360 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:27:04.360 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:27:04.360 | 99.00th=[36439], 99.50th=[37487], 99.90th=[57934], 99.95th=[57934], 00:27:04.360 | 99.99th=[70779] 00:27:04.360 bw ( KiB/s): min= 1715, max= 2048, per=4.16%, avg=1916.15, 
stdev=69.16, samples=20 00:27:04.360 iops : min= 428, max= 512, avg=479.00, stdev=17.41, samples=20 00:27:04.360 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:27:04.360 cpu : usr=98.33%, sys=1.29%, ctx=14, majf=0, minf=41 00:27:04.360 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:27:04.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.360 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.360 issued rwts: total=4806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.360 filename2: (groupid=0, jobs=1): err= 0: pid=2431519: Mon Jul 15 10:39:57 2024 00:27:04.360 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10003msec) 00:27:04.360 slat (usec): min=13, max=131, avg=56.58, stdev=18.56 00:27:04.360 clat (usec): min=14730, max=59338, avg=32833.36, stdev=2028.78 00:27:04.360 lat (usec): min=14747, max=59373, avg=32889.94, stdev=2026.90 00:27:04.360 clat percentiles (usec): 00:27:04.360 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:27:04.360 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:27:04.360 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34866], 00:27:04.360 | 99.00th=[36439], 99.50th=[36439], 99.90th=[58983], 99.95th=[59507], 00:27:04.360 | 99.99th=[59507] 00:27:04.360 bw ( KiB/s): min= 1667, max= 2048, per=4.15%, avg=1913.21, stdev=78.99, samples=19 00:27:04.360 iops : min= 416, max= 512, avg=478.26, stdev=19.88, samples=19 00:27:04.360 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:27:04.360 cpu : usr=97.94%, sys=1.65%, ctx=17, majf=0, minf=59 00:27:04.360 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:04.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.361 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.361 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.361 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.361 filename2: (groupid=0, jobs=1): err= 0: pid=2431520: Mon Jul 15 10:39:57 2024 00:27:04.361 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10003msec) 00:27:04.361 slat (usec): min=7, max=114, avg=41.21, stdev=28.55 00:27:04.361 clat (usec): min=2909, max=59304, avg=33192.04, stdev=2363.08 00:27:04.361 lat (usec): min=2918, max=59343, avg=33233.25, stdev=2363.29 00:27:04.361 clat percentiles (usec): 00:27:04.361 | 1.00th=[31327], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:27:04.361 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:04.361 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34866], 00:27:04.361 | 99.00th=[36439], 99.50th=[52167], 99.90th=[58983], 99.95th=[59507], 00:27:04.361 | 99.99th=[59507] 00:27:04.361 bw ( KiB/s): min= 1635, max= 1968, per=4.15%, avg=1911.53, stdev=70.76, samples=19 00:27:04.361 iops : min= 408, max= 492, avg=477.84, stdev=17.85, samples=19 00:27:04.361 lat (msec) : 4=0.08%, 20=0.33%, 50=99.04%, 100=0.54% 00:27:04.361 cpu : usr=97.90%, sys=1.64%, ctx=60, majf=0, minf=85 00:27:04.361 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=80.7%, 16=18.5%, 32=0.0%, >=64=0.0% 00:27:04.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.361 complete : 0=0.0%, 4=89.5%, 8=10.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.361 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.361 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:27:04.361 filename2: (groupid=0, jobs=1): err= 0: pid=2431521: Mon Jul 15 10:39:57 2024 00:27:04.361 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10006msec) 00:27:04.361 slat (usec): min=7, max=101, avg=20.60, stdev=12.65 00:27:04.361 clat (usec): min=19583, max=58382, avg=33158.99, stdev=1887.71 00:27:04.361 lat (usec): min=19605, max=58412, avg=33179.59, stdev=1889.27 00:27:04.361 clat percentiles (usec): 00:27:04.361 | 1.00th=[30016], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:27:04.361 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:04.361 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34866], 00:27:04.361 | 99.00th=[36439], 99.50th=[39584], 99.90th=[58459], 99.95th=[58459], 00:27:04.361 | 99.99th=[58459] 00:27:04.361 bw ( KiB/s): min= 1667, max= 2048, per=4.15%, avg=1913.42, stdev=66.49, samples=19 00:27:04.361 iops : min= 416, max= 512, avg=478.32, stdev=16.78, samples=19 00:27:04.361 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:27:04.361 cpu : usr=95.06%, sys=2.85%, ctx=198, majf=0, minf=86 00:27:04.361 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:27:04.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.361 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.361 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.361 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.361 filename2: (groupid=0, jobs=1): err= 0: pid=2431522: Mon Jul 15 10:39:57 2024 00:27:04.361 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10012msec) 00:27:04.361 slat (usec): min=9, max=135, avg=55.04, stdev=19.27 00:27:04.361 clat (usec): min=25167, max=48092, avg=32886.07, stdev=1256.39 00:27:04.361 lat (usec): min=25209, max=48132, avg=32941.11, stdev=1255.11 00:27:04.361 clat percentiles (usec): 00:27:04.361 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:27:04.361 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:27:04.361 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:27:04.361 | 99.00th=[36439], 99.50th=[36963], 99.90th=[47973], 99.95th=[47973], 00:27:04.361 | 99.99th=[47973] 00:27:04.361 bw ( KiB/s): min= 1664, max= 2043, per=4.15%, avg=1913.35, stdev=64.80, samples=20 00:27:04.361 iops : min= 416, max= 510, avg=478.30, stdev=16.12, samples=20 00:27:04.361 lat (msec) : 50=100.00% 00:27:04.361 cpu : usr=94.06%, sys=3.12%, ctx=124, majf=0, minf=49 00:27:04.361 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:04.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.361 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.361 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.361 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.361 filename2: (groupid=0, jobs=1): err= 0: pid=2431523: Mon Jul 15 10:39:57 2024 00:27:04.361 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10012msec) 00:27:04.361 slat (nsec): min=8211, max=99054, avg=38301.77, stdev=13707.10 00:27:04.361 clat (usec): min=25081, max=47994, avg=33060.11, stdev=1196.69 00:27:04.361 lat (usec): min=25120, max=48024, avg=33098.41, stdev=1194.97 00:27:04.361 clat percentiles (usec): 00:27:04.361 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:27:04.361 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 
60.00th=[32900], 00:27:04.361 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:27:04.361 | 99.00th=[36439], 99.50th=[36439], 99.90th=[47973], 99.95th=[47973], 00:27:04.361 | 99.99th=[47973] 00:27:04.361 bw ( KiB/s): min= 1664, max= 2043, per=4.15%, avg=1913.35, stdev=64.80, samples=20 00:27:04.361 iops : min= 416, max= 510, avg=478.30, stdev=16.12, samples=20 00:27:04.361 lat (msec) : 50=100.00% 00:27:04.361 cpu : usr=98.23%, sys=1.37%, ctx=13, majf=0, minf=41 00:27:04.361 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:04.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.361 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.361 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.361 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.361 00:27:04.361 Run status group 0 (all jobs): 00:27:04.361 READ: bw=45.0MiB/s (47.1MB/s), 1914KiB/s-1932KiB/s (1960kB/s-1978kB/s), io=451MiB (473MB), run=10003-10022msec 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.361 10:39:57 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.361 bdev_null0 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.361 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.361 [2024-07-15 10:39:57.870517] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.362 bdev_null1 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:04.362 10:39:57 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.362 { 00:27:04.362 "params": { 00:27:04.362 "name": "Nvme$subsystem", 00:27:04.362 "trtype": "$TEST_TRANSPORT", 00:27:04.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.362 "adrfam": "ipv4", 00:27:04.362 "trsvcid": "$NVMF_PORT", 00:27:04.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.362 "hdgst": ${hdgst:-false}, 00:27:04.362 "ddgst": ${ddgst:-false} 00:27:04.362 }, 00:27:04.362 "method": "bdev_nvme_attach_controller" 00:27:04.362 } 00:27:04.362 EOF 00:27:04.362 )") 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.362 { 00:27:04.362 "params": { 00:27:04.362 "name": "Nvme$subsystem", 00:27:04.362 "trtype": "$TEST_TRANSPORT", 00:27:04.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.362 "adrfam": "ipv4", 00:27:04.362 "trsvcid": "$NVMF_PORT", 00:27:04.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.362 "hdgst": ${hdgst:-false}, 00:27:04.362 "ddgst": ${ddgst:-false} 00:27:04.362 }, 00:27:04.362 "method": "bdev_nvme_attach_controller" 00:27:04.362 } 00:27:04.362 EOF 00:27:04.362 )") 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file 
<= files )) 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:04.362 "params": { 00:27:04.362 "name": "Nvme0", 00:27:04.362 "trtype": "tcp", 00:27:04.362 "traddr": "10.0.0.2", 00:27:04.362 "adrfam": "ipv4", 00:27:04.362 "trsvcid": "4420", 00:27:04.362 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:04.362 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:04.362 "hdgst": false, 00:27:04.362 "ddgst": false 00:27:04.362 }, 00:27:04.362 "method": "bdev_nvme_attach_controller" 00:27:04.362 },{ 00:27:04.362 "params": { 00:27:04.362 "name": "Nvme1", 00:27:04.362 "trtype": "tcp", 00:27:04.362 "traddr": "10.0.0.2", 00:27:04.362 "adrfam": "ipv4", 00:27:04.362 "trsvcid": "4420", 00:27:04.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:04.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:04.362 "hdgst": false, 00:27:04.362 "ddgst": false 00:27:04.362 }, 00:27:04.362 "method": "bdev_nvme_attach_controller" 00:27:04.362 }' 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:04.362 10:39:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:04.362 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:04.362 ... 00:27:04.362 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:04.362 ... 
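The JSON printed by gen_nvmf_target_json above is what fio's spdk_bdev ioengine consumes over /dev/fd/62. A minimal standalone sketch of the same attach, assuming a target already listening on 10.0.0.2:4420 and the plugin built at build/fio/spdk_bdev (the file name bdev.json and the job parameters here are illustrative, not the test's literal descriptors):

# bdev.json wraps one of the bdev_nvme_attach_controller param blocks shown above
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# mirror the LD_PRELOAD invocation fio_bdev performs above; Nvme0n1 is the
# bdev exposed for namespace 1 of the attached controller
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
  --name=filename0 --filename=Nvme0n1 --thread=1 \
  --rw=randread --bs=8k --iodepth=8 --runtime=5 --time_based=1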
00:27:04.362 fio-3.35 00:27:04.362 Starting 4 threads 00:27:04.362 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.622 00:27:09.622 filename0: (groupid=0, jobs=1): err= 0: pid=2432905: Mon Jul 15 10:40:03 2024 00:27:09.622 read: IOPS=1926, BW=15.1MiB/s (15.8MB/s)(75.3MiB/5003msec) 00:27:09.622 slat (nsec): min=4357, max=55738, avg=12776.22, stdev=5801.13 00:27:09.622 clat (usec): min=1018, max=7364, avg=4111.13, stdev=656.20 00:27:09.622 lat (usec): min=1036, max=7380, avg=4123.90, stdev=656.25 00:27:09.622 clat percentiles (usec): 00:27:09.622 | 1.00th=[ 2835], 5.00th=[ 3228], 10.00th=[ 3490], 20.00th=[ 3720], 00:27:09.622 | 30.00th=[ 3851], 40.00th=[ 3949], 50.00th=[ 4047], 60.00th=[ 4113], 00:27:09.622 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4883], 95.00th=[ 5669], 00:27:09.622 | 99.00th=[ 6259], 99.50th=[ 6521], 99.90th=[ 7111], 99.95th=[ 7308], 00:27:09.622 | 99.99th=[ 7373] 00:27:09.622 bw ( KiB/s): min=14928, max=16176, per=25.26%, avg=15411.20, stdev=398.67, samples=10 00:27:09.622 iops : min= 1866, max= 2022, avg=1926.40, stdev=49.83, samples=10 00:27:09.622 lat (msec) : 2=0.06%, 4=44.84%, 10=55.10% 00:27:09.622 cpu : usr=94.64%, sys=4.84%, ctx=11, majf=0, minf=32 00:27:09.622 IO depths : 1=0.1%, 2=6.4%, 4=66.2%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:09.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.622 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.622 issued rwts: total=9639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.622 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:09.622 filename0: (groupid=0, jobs=1): err= 0: pid=2432906: Mon Jul 15 10:40:03 2024 00:27:09.622 read: IOPS=1894, BW=14.8MiB/s (15.5MB/s)(74.0MiB/5002msec) 00:27:09.622 slat (nsec): min=4734, max=75690, avg=15668.59, stdev=7230.25 00:27:09.622 clat (usec): min=823, max=8327, avg=4171.28, stdev=712.05 00:27:09.622 lat (usec): min=842, max=8340, avg=4186.95, stdev=711.21 00:27:09.622 clat percentiles (usec): 00:27:09.622 | 1.00th=[ 2933], 5.00th=[ 3392], 10.00th=[ 3556], 20.00th=[ 3720], 00:27:09.622 | 30.00th=[ 3851], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4113], 00:27:09.622 | 70.00th=[ 4178], 80.00th=[ 4359], 90.00th=[ 5538], 95.00th=[ 5866], 00:27:09.622 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 7242], 99.95th=[ 8160], 00:27:09.622 | 99.99th=[ 8356] 00:27:09.622 bw ( KiB/s): min=14656, max=15744, per=24.83%, avg=15154.90, stdev=308.87, samples=10 00:27:09.622 iops : min= 1832, max= 1968, avg=1894.30, stdev=38.65, samples=10 00:27:09.622 lat (usec) : 1000=0.02% 00:27:09.622 lat (msec) : 2=0.04%, 4=42.13%, 10=57.81% 00:27:09.622 cpu : usr=94.20%, sys=4.86%, ctx=20, majf=0, minf=60 00:27:09.622 IO depths : 1=0.1%, 2=4.6%, 4=67.9%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:09.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.622 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.622 issued rwts: total=9478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.622 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:09.622 filename1: (groupid=0, jobs=1): err= 0: pid=2432907: Mon Jul 15 10:40:03 2024 00:27:09.622 read: IOPS=1925, BW=15.0MiB/s (15.8MB/s)(75.3MiB/5002msec) 00:27:09.622 slat (nsec): min=4316, max=60936, avg=16442.32, stdev=7751.34 00:27:09.622 clat (usec): min=1140, max=7324, avg=4100.53, stdev=624.69 00:27:09.622 lat (usec): min=1159, max=7339, avg=4116.98, stdev=624.26 00:27:09.622 clat percentiles (usec): 
00:27:09.622 | 1.00th=[ 2835], 5.00th=[ 3359], 10.00th=[ 3556], 20.00th=[ 3752], 00:27:09.622 | 30.00th=[ 3818], 40.00th=[ 3916], 50.00th=[ 4015], 60.00th=[ 4080], 00:27:09.622 | 70.00th=[ 4146], 80.00th=[ 4293], 90.00th=[ 4883], 95.00th=[ 5669], 00:27:09.622 | 99.00th=[ 6128], 99.50th=[ 6456], 99.90th=[ 6915], 99.95th=[ 7111], 00:27:09.622 | 99.99th=[ 7308] 00:27:09.622 bw ( KiB/s): min=14960, max=16032, per=25.25%, avg=15407.90, stdev=317.54, samples=10 00:27:09.622 iops : min= 1870, max= 2004, avg=1925.90, stdev=39.65, samples=10 00:27:09.622 lat (msec) : 2=0.07%, 4=47.92%, 10=52.01% 00:27:09.622 cpu : usr=90.56%, sys=6.92%, ctx=268, majf=0, minf=61 00:27:09.622 IO depths : 1=0.2%, 2=6.1%, 4=66.5%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:09.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.622 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.622 issued rwts: total=9633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.622 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:09.622 filename1: (groupid=0, jobs=1): err= 0: pid=2432908: Mon Jul 15 10:40:03 2024 00:27:09.622 read: IOPS=1881, BW=14.7MiB/s (15.4MB/s)(73.5MiB/5003msec) 00:27:09.622 slat (nsec): min=3933, max=89189, avg=14878.19, stdev=7503.47 00:27:09.622 clat (usec): min=735, max=7643, avg=4206.32, stdev=676.60 00:27:09.622 lat (usec): min=749, max=7669, avg=4221.20, stdev=676.46 00:27:09.622 clat percentiles (usec): 00:27:09.622 | 1.00th=[ 2802], 5.00th=[ 3425], 10.00th=[ 3621], 20.00th=[ 3818], 00:27:09.622 | 30.00th=[ 3884], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:27:09.622 | 70.00th=[ 4228], 80.00th=[ 4490], 90.00th=[ 5080], 95.00th=[ 5735], 00:27:09.622 | 99.00th=[ 6390], 99.50th=[ 6652], 99.90th=[ 7308], 99.95th=[ 7439], 00:27:09.622 | 99.99th=[ 7635] 00:27:09.622 bw ( KiB/s): min=14560, max=15376, per=24.65%, avg=15044.80, stdev=254.78, samples=10 00:27:09.622 iops : min= 1820, max= 1922, avg=1880.60, stdev=31.85, samples=10 00:27:09.622 lat (usec) : 750=0.01%, 1000=0.04% 00:27:09.622 lat (msec) : 2=0.21%, 4=38.83%, 10=60.91% 00:27:09.622 cpu : usr=94.84%, sys=4.30%, ctx=60, majf=0, minf=32 00:27:09.622 IO depths : 1=0.1%, 2=5.4%, 4=66.1%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:09.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.622 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.622 issued rwts: total=9411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.622 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:09.622 00:27:09.622 Run status group 0 (all jobs): 00:27:09.622 READ: bw=59.6MiB/s (62.5MB/s), 14.7MiB/s-15.1MiB/s (15.4MB/s-15.8MB/s), io=298MiB (313MB), run=5002-5003msec 00:27:09.622 10:40:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:09.622 10:40:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:09.622 10:40:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:09.622 10:40:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:09.622 10:40:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:09.622 10:40:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:09.622 10:40:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.622 10:40:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:27:09.622 10:40:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.622 10:40:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:09.622 10:40:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.622 10:40:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.880 10:40:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.880 10:40:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:09.880 10:40:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:09.880 10:40:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:09.880 10:40:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:09.880 10:40:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.880 10:40:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.880 10:40:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.880 10:40:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:09.880 10:40:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.880 10:40:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.880 10:40:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.880 00:27:09.880 real 0m24.487s 00:27:09.880 user 4m29.870s 00:27:09.880 sys 0m8.110s 00:27:09.880 10:40:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:09.880 10:40:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:09.880 ************************************ 00:27:09.880 END TEST fio_dif_rand_params 00:27:09.880 ************************************ 00:27:09.880 10:40:04 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:09.880 10:40:04 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:09.880 10:40:04 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:09.880 10:40:04 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:09.880 10:40:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:09.880 ************************************ 00:27:09.880 START TEST fio_dif_digest 00:27:09.880 ************************************ 00:27:09.880 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:27:09.880 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:27:09.880 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:09.880 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:27:09.880 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:27:09.881 
10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:09.881 bdev_null0 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:09.881 [2024-07-15 10:40:04.366719] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:09.881 { 00:27:09.881 "params": { 00:27:09.881 "name": "Nvme$subsystem", 00:27:09.881 "trtype": "$TEST_TRANSPORT", 00:27:09.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.881 "adrfam": "ipv4", 00:27:09.881 "trsvcid": "$NVMF_PORT", 00:27:09.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.881 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:27:09.881 "hdgst": ${hdgst:-false}, 00:27:09.881 "ddgst": ${ddgst:-false} 00:27:09.881 }, 00:27:09.881 "method": "bdev_nvme_attach_controller" 00:27:09.881 } 00:27:09.881 EOF 00:27:09.881 )") 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:09.881 "params": { 00:27:09.881 "name": "Nvme0", 00:27:09.881 "trtype": "tcp", 00:27:09.881 "traddr": "10.0.0.2", 00:27:09.881 "adrfam": "ipv4", 00:27:09.881 "trsvcid": "4420", 00:27:09.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:09.881 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:09.881 "hdgst": true, 00:27:09.881 "ddgst": true 00:27:09.881 }, 00:27:09.881 "method": "bdev_nvme_attach_controller" 00:27:09.881 }' 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:09.881 10:40:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:10.139 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:10.139 ... 
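The job line that follows matches the dif.sh@127 settings (bs=128k,128k,128k, numjobs=3, iodepth=3, runtime=10). A sketch of a fio job file with the same shape; the real file is generated by gen_fio_conf on /dev/fd/61, so the section name and the thread line here are assumptions:

cat > dif_digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
time_based=1
runtime=10
iodepth=3
rw=randread
bs=128k

[filename0]
filename=Nvme0n1
numjobs=3
EOF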
00:27:10.139 fio-3.35 00:27:10.139 Starting 3 threads 00:27:10.139 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.333 00:27:22.333 filename0: (groupid=0, jobs=1): err= 0: pid=2433676: Mon Jul 15 10:40:15 2024 00:27:22.333 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(270MiB/10046msec) 00:27:22.333 slat (nsec): min=4708, max=34696, avg=14075.32, stdev=1866.83 00:27:22.333 clat (usec): min=10302, max=55407, avg=13908.85, stdev=1502.36 00:27:22.333 lat (usec): min=10315, max=55423, avg=13922.92, stdev=1502.44 00:27:22.333 clat percentiles (usec): 00:27:22.333 | 1.00th=[11469], 5.00th=[12387], 10.00th=[12649], 20.00th=[13042], 00:27:22.333 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13829], 60.00th=[14091], 00:27:22.333 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15008], 95.00th=[15401], 00:27:22.333 | 99.00th=[16450], 99.50th=[16712], 99.90th=[20579], 99.95th=[47449], 00:27:22.333 | 99.99th=[55313] 00:27:22.333 bw ( KiB/s): min=26368, max=28416, per=34.33%, avg=27635.20, stdev=487.67, samples=20 00:27:22.333 iops : min= 206, max= 222, avg=215.90, stdev= 3.81, samples=20 00:27:22.333 lat (msec) : 20=99.86%, 50=0.09%, 100=0.05% 00:27:22.333 cpu : usr=92.75%, sys=6.60%, ctx=24, majf=0, minf=136 00:27:22.333 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:22.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.333 issued rwts: total=2161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.333 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:22.333 filename0: (groupid=0, jobs=1): err= 0: pid=2433677: Mon Jul 15 10:40:15 2024 00:27:22.333 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(249MiB/10045msec) 00:27:22.333 slat (nsec): min=4560, max=22507, avg=13709.44, stdev=1514.39 00:27:22.333 clat (usec): min=11363, max=52519, avg=15089.75, stdev=1618.95 00:27:22.333 lat (usec): min=11376, max=52538, avg=15103.45, stdev=1619.05 00:27:22.333 clat percentiles (usec): 00:27:22.333 | 1.00th=[12649], 5.00th=[13304], 10.00th=[13698], 20.00th=[14091], 00:27:22.333 | 30.00th=[14353], 40.00th=[14615], 50.00th=[15008], 60.00th=[15270], 00:27:22.333 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16581], 95.00th=[16909], 00:27:22.333 | 99.00th=[17957], 99.50th=[18744], 99.90th=[49546], 99.95th=[52691], 00:27:22.333 | 99.99th=[52691] 00:27:22.333 bw ( KiB/s): min=24320, max=26112, per=31.64%, avg=25474.45, stdev=453.77, samples=20 00:27:22.333 iops : min= 190, max= 204, avg=199.00, stdev= 3.58, samples=20 00:27:22.333 lat (msec) : 20=99.75%, 50=0.20%, 100=0.05% 00:27:22.333 cpu : usr=92.89%, sys=6.41%, ctx=28, majf=0, minf=87 00:27:22.333 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:22.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.333 issued rwts: total=1992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.333 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:22.333 filename0: (groupid=0, jobs=1): err= 0: pid=2433678: Mon Jul 15 10:40:15 2024 00:27:22.333 read: IOPS=215, BW=26.9MiB/s (28.3MB/s)(271MiB/10047msec) 00:27:22.333 slat (nsec): min=4338, max=46385, avg=14113.76, stdev=1722.32 00:27:22.333 clat (usec): min=10845, max=52201, avg=13878.29, stdev=1484.81 00:27:22.333 lat (usec): min=10859, max=52223, avg=13892.40, stdev=1484.93 00:27:22.333 clat percentiles (usec): 00:27:22.333 | 
1.00th=[11469], 5.00th=[12256], 10.00th=[12649], 20.00th=[13042], 00:27:22.333 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:27:22.333 | 70.00th=[14222], 80.00th=[14615], 90.00th=[15008], 95.00th=[15401], 00:27:22.333 | 99.00th=[16319], 99.50th=[16581], 99.90th=[22938], 99.95th=[48497], 00:27:22.333 | 99.99th=[52167] 00:27:22.333 bw ( KiB/s): min=25856, max=28928, per=34.41%, avg=27699.20, stdev=630.36, samples=20 00:27:22.333 iops : min= 202, max= 226, avg=216.40, stdev= 4.92, samples=20 00:27:22.333 lat (msec) : 20=99.77%, 50=0.18%, 100=0.05% 00:27:22.333 cpu : usr=92.38%, sys=7.16%, ctx=20, majf=0, minf=126 00:27:22.333 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:22.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.333 issued rwts: total=2166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.333 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:22.333 00:27:22.333 Run status group 0 (all jobs): 00:27:22.333 READ: bw=78.6MiB/s (82.4MB/s), 24.8MiB/s-26.9MiB/s (26.0MB/s-28.3MB/s), io=790MiB (828MB), run=10045-10047msec 00:27:22.333 10:40:15 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:22.333 10:40:15 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:27:22.333 10:40:15 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:27:22.333 10:40:15 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:22.333 10:40:15 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:27:22.333 10:40:15 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:22.333 10:40:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.333 10:40:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:22.333 10:40:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.333 10:40:15 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:22.333 10:40:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.333 10:40:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:22.333 10:40:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.333 00:27:22.333 real 0m11.152s 00:27:22.333 user 0m29.067s 00:27:22.333 sys 0m2.280s 00:27:22.333 10:40:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:22.333 10:40:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:22.333 ************************************ 00:27:22.333 END TEST fio_dif_digest 00:27:22.333 ************************************ 00:27:22.333 10:40:15 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:22.333 10:40:15 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:22.333 10:40:15 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:27:22.333 10:40:15 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:22.333 10:40:15 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:27:22.333 10:40:15 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:22.333 10:40:15 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:27:22.333 10:40:15 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:22.333 10:40:15 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:27:22.333 rmmod nvme_tcp 00:27:22.333 rmmod nvme_fabrics 00:27:22.333 rmmod nvme_keyring 00:27:22.333 10:40:15 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:22.333 10:40:15 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:27:22.333 10:40:15 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:27:22.333 10:40:15 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2427608 ']' 00:27:22.333 10:40:15 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2427608 00:27:22.333 10:40:15 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2427608 ']' 00:27:22.333 10:40:15 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2427608 00:27:22.333 10:40:15 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:27:22.333 10:40:15 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:22.333 10:40:15 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2427608 00:27:22.333 10:40:15 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:22.333 10:40:15 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:22.333 10:40:15 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2427608' 00:27:22.333 killing process with pid 2427608 00:27:22.333 10:40:15 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2427608 00:27:22.333 10:40:15 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2427608 00:27:22.333 10:40:15 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:22.333 10:40:15 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:22.333 Waiting for block devices as requested 00:27:22.333 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:22.629 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:22.629 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:22.888 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:22.888 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:22.888 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:22.888 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:23.147 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:23.147 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:23.147 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:23.147 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:23.405 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:23.405 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:23.405 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:23.405 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:23.405 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:23.661 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:23.661 10:40:18 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:23.661 10:40:18 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:23.661 10:40:18 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:23.661 10:40:18 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:23.661 10:40:18 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.661 10:40:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:23.661 10:40:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.186 10:40:20 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:26.186 00:27:26.186 real 1m6.813s 00:27:26.186 user 6m26.191s 00:27:26.186 sys 0m19.899s 00:27:26.186 10:40:20 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:27:26.186 10:40:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:26.186 ************************************ 00:27:26.186 END TEST nvmf_dif 00:27:26.186 ************************************ 00:27:26.186 10:40:20 -- common/autotest_common.sh@1142 -- # return 0 00:27:26.186 10:40:20 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:26.186 10:40:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:26.186 10:40:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:26.186 10:40:20 -- common/autotest_common.sh@10 -- # set +x 00:27:26.186 ************************************ 00:27:26.186 START TEST nvmf_abort_qd_sizes 00:27:26.186 ************************************ 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:26.186 * Looking for test storage... 00:27:26.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.186 10:40:20 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.187 10:40:20 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:27:26.187 10:40:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:28.082 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:28.082 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:28.082 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:28.082 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
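The discovery above is a plain sysfs walk: each supported PCI function is mapped to its kernel net device by globbing its net/ directory, exactly as nvmf/common.sh@383-400 does. A condensed sketch of that loop for the two e810 functions found on this node:

# map each e810 PCI function to its net device, as gather_supported_nvmf_pci_devs does
for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done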
00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:28.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:28.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:27:28.082 00:27:28.082 --- 10.0.0.2 ping statistics --- 00:27:28.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.082 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:28.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:28.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:27:28.082 00:27:28.082 --- 10.0.0.1 ping statistics --- 00:27:28.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.082 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:28.082 10:40:22 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:29.014 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:29.014 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:29.014 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:29.014 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:29.014 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:29.014 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:29.014 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:29.014 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:29.014 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:29.014 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:29.014 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:29.014 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:29.014 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:29.014 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:29.014 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:29.014 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:29.947 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2438466 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2438466 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2438466 ']' 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:30.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:30.206 10:40:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:30.206 [2024-07-15 10:40:24.684327] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:30.206 [2024-07-15 10:40:24.684416] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.206 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.206 [2024-07-15 10:40:24.751758] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:30.464 [2024-07-15 10:40:24.875513] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.464 [2024-07-15 10:40:24.875571] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.464 [2024-07-15 10:40:24.875587] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.464 [2024-07-15 10:40:24.875600] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.464 [2024-07-15 10:40:24.875611] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:30.464 [2024-07-15 10:40:24.876903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.464 [2024-07-15 10:40:24.876954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.464 [2024-07-15 10:40:24.877045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:30.464 [2024-07-15 10:40:24.877049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:27:30.464 10:40:25 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:30.464 10:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:30.464 ************************************ 00:27:30.464 START TEST spdk_target_abort 00:27:30.464 ************************************ 00:27:30.464 10:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:27:30.464 10:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:30.464 10:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:27:30.464 10:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.464 10:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.744 spdk_targetn1 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.744 [2024-07-15 10:40:27.899954] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.744 [2024-07-15 10:40:27.932247] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:33.744 10:40:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:33.744 EAL: No free 2048 kB hugepages 
reported on node 1 00:27:37.023 Initializing NVMe Controllers 00:27:37.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:37.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:37.023 Initialization complete. Launching workers. 00:27:37.023 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11365, failed: 0 00:27:37.023 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1278, failed to submit 10087 00:27:37.023 success 832, unsuccess 446, failed 0 00:27:37.023 10:40:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:37.023 10:40:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:37.023 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.297 Initializing NVMe Controllers 00:27:40.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:40.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:40.297 Initialization complete. Launching workers. 00:27:40.297 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8597, failed: 0 00:27:40.297 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1243, failed to submit 7354 00:27:40.297 success 328, unsuccess 915, failed 0 00:27:40.297 10:40:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:40.297 10:40:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:40.297 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.603 Initializing NVMe Controllers 00:27:43.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:43.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:43.603 Initialization complete. Launching workers. 
00:27:43.603 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31888, failed: 0 00:27:43.603 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2717, failed to submit 29171 00:27:43.603 success 534, unsuccess 2183, failed 0 00:27:43.603 10:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:43.603 10:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.603 10:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:43.603 10:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.603 10:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:43.603 10:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.603 10:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.535 10:40:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.535 10:40:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2438466 00:27:44.535 10:40:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2438466 ']' 00:27:44.535 10:40:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2438466 00:27:44.535 10:40:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:44.535 10:40:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:44.535 10:40:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2438466 00:27:44.535 10:40:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:44.535 10:40:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:44.535 10:40:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2438466' 00:27:44.535 killing process with pid 2438466 00:27:44.535 10:40:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2438466 00:27:44.535 10:40:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2438466 00:27:44.794 00:27:44.794 real 0m14.240s 00:27:44.794 user 0m53.052s 00:27:44.794 sys 0m2.928s 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.794 ************************************ 00:27:44.794 END TEST spdk_target_abort 00:27:44.794 ************************************ 00:27:44.794 10:40:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:44.794 10:40:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:44.794 10:40:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:44.794 10:40:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.794 10:40:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:44.794 
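A consistency check on the three spdk_target_abort passes above: for every queue depth the counters printed by the abort tool satisfy

    aborts submitted + aborts failed-to-submit = I/Os completed
    success + unsuccess = aborts submitted

For the -q 64 pass, 2717 + 29171 = 31888 and 534 + 2183 = 2717; the -q 4 pass gives 1278 + 10087 = 11365 with 832 + 446 = 1278, and the -q 24 pass gives 1243 + 7354 = 8597 with 328 + 915 = 1243. The log does not define the two buckets, but on the usual reading "success" counts aborts the target honoured while "unsuccess" (the tool's own spelling) counts aborts answered after the I/O in question had already completed.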
************************************ 00:27:44.794 START TEST kernel_target_abort 00:27:44.794 ************************************ 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:44.794 10:40:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:46.170 Waiting for block devices as requested 00:27:46.170 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:46.170 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:46.170 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:46.170 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:46.428 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:46.428 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:46.428 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:46.428 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:46.686 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:46.686 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:46.686 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:46.686 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:46.945 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:46.946 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:46.946 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:46.946 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:47.205 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:47.205 No valid GPT data, bailing 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:47.205 10:40:41 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:47.205 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:47.463 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:27:47.463 00:27:47.463 Discovery Log Number of Records 2, Generation counter 2 00:27:47.463 =====Discovery Log Entry 0====== 00:27:47.463 trtype: tcp 00:27:47.463 adrfam: ipv4 00:27:47.463 subtype: current discovery subsystem 00:27:47.463 treq: not specified, sq flow control disable supported 00:27:47.464 portid: 1 00:27:47.464 trsvcid: 4420 00:27:47.464 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:47.464 traddr: 10.0.0.1 00:27:47.464 eflags: none 00:27:47.464 sectype: none 00:27:47.464 =====Discovery Log Entry 1====== 00:27:47.464 trtype: tcp 00:27:47.464 adrfam: ipv4 00:27:47.464 subtype: nvme subsystem 00:27:47.464 treq: not specified, sq flow control disable supported 00:27:47.464 portid: 1 00:27:47.464 trsvcid: 4420 00:27:47.464 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:47.464 traddr: 10.0.0.1 00:27:47.464 eflags: none 00:27:47.464 sectype: none 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:47.464 10:40:41 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:47.464 10:40:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:47.464 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.742 Initializing NVMe Controllers 00:27:50.742 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:50.742 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:50.742 Initialization complete. Launching workers. 00:27:50.742 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33023, failed: 0 00:27:50.742 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33023, failed to submit 0 00:27:50.742 success 0, unsuccess 33023, failed 0 00:27:50.742 10:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:50.742 10:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:50.742 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.021 Initializing NVMe Controllers 00:27:54.021 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:54.021 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:54.021 Initialization complete. Launching workers. 
00:27:54.021 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64677, failed: 0 00:27:54.021 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16310, failed to submit 48367 00:27:54.021 success 0, unsuccess 16310, failed 0 00:27:54.021 10:40:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:54.021 10:40:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:54.021 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.293 Initializing NVMe Controllers 00:27:57.293 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:57.293 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:57.293 Initialization complete. Launching workers. 00:27:57.293 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62904, failed: 0 00:27:57.293 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15718, failed to submit 47186 00:27:57.293 success 0, unsuccess 15718, failed 0 00:27:57.293 10:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:57.293 10:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:57.293 10:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:57.293 10:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:57.293 10:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:57.293 10:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:57.293 10:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:57.293 10:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:57.293 10:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:57.293 10:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:57.860 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:57.860 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:57.860 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:57.860 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:57.860 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:58.119 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:58.119 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:58.119 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:58.119 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:58.119 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:58.119 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:58.119 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:58.119 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:58.119 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:27:58.119 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:58.119 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:59.068 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:59.068 00:27:59.068 real 0m14.339s 00:27:59.068 user 0m5.260s 00:27:59.068 sys 0m3.403s 00:27:59.068 10:40:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:59.068 10:40:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:59.068 ************************************ 00:27:59.068 END TEST kernel_target_abort 00:27:59.068 ************************************ 00:27:59.068 10:40:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:59.068 10:40:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:59.068 10:40:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:59.068 10:40:53 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:59.068 10:40:53 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:59.068 10:40:53 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:59.068 10:40:53 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:59.068 10:40:53 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:59.068 10:40:53 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:59.068 rmmod nvme_tcp 00:27:59.325 rmmod nvme_fabrics 00:27:59.325 rmmod nvme_keyring 00:27:59.325 10:40:53 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:59.325 10:40:53 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:59.325 10:40:53 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:59.325 10:40:53 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2438466 ']' 00:27:59.325 10:40:53 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2438466 00:27:59.325 10:40:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2438466 ']' 00:27:59.325 10:40:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2438466 00:27:59.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2438466) - No such process 00:27:59.325 10:40:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2438466 is not found' 00:27:59.325 Process with pid 2438466 is not found 00:27:59.325 10:40:53 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:59.326 10:40:53 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:00.260 Waiting for block devices as requested 00:28:00.260 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:28:00.518 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:00.518 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:00.775 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:00.775 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:00.775 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:00.775 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:00.775 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:01.043 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:01.043 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:01.043 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:01.370 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:01.370 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:01.370 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:28:01.370 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:01.370 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:01.628 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:01.628 10:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:01.628 10:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:01.628 10:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:01.628 10:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:01.628 10:40:56 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.628 10:40:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:01.628 10:40:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.156 10:40:58 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:04.156 00:28:04.156 real 0m37.893s 00:28:04.156 user 1m0.428s 00:28:04.156 sys 0m9.576s 00:28:04.156 10:40:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:04.156 10:40:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:04.156 ************************************ 00:28:04.156 END TEST nvmf_abort_qd_sizes 00:28:04.156 ************************************ 00:28:04.156 10:40:58 -- common/autotest_common.sh@1142 -- # return 0 00:28:04.156 10:40:58 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:04.156 10:40:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:04.156 10:40:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:04.156 10:40:58 -- common/autotest_common.sh@10 -- # set +x 00:28:04.156 ************************************ 00:28:04.156 START TEST keyring_file 00:28:04.156 ************************************ 00:28:04.156 10:40:58 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:04.156 * Looking for test storage... 
00:28:04.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:04.156 10:40:58 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:04.156 10:40:58 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:04.156 10:40:58 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:04.157 10:40:58 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.157 10:40:58 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.157 10:40:58 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.157 10:40:58 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.157 10:40:58 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.157 10:40:58 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.157 10:40:58 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:04.157 10:40:58 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@47 -- # : 0 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:04.157 10:40:58 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:04.157 10:40:58 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:04.157 10:40:58 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:04.157 10:40:58 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:04.157 10:40:58 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:04.157 10:40:58 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4XKQzChSlB 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:04.157 10:40:58 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4XKQzChSlB 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4XKQzChSlB 00:28:04.157 10:40:58 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.4XKQzChSlB 00:28:04.157 10:40:58 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JstJQ68r7E 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:04.157 10:40:58 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JstJQ68r7E 00:28:04.157 10:40:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JstJQ68r7E 00:28:04.157 10:40:58 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.JstJQ68r7E 00:28:04.157 10:40:58 keyring_file -- keyring/file.sh@30 -- # tgtpid=2444228 00:28:04.157 10:40:58 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2444228 00:28:04.157 10:40:58 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:04.157 10:40:58 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2444228 ']' 00:28:04.157 10:40:58 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.157 10:40:58 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:04.157 10:40:58 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.157 10:40:58 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:04.157 10:40:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:04.157 [2024-07-15 10:40:58.446816] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
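The key material staged above for key0 and key1 is written in the NVMe/TCP TLS PSK interchange format. The body of format_interchange_psk is not captured in this trace, so the following is only a sketch of the transformation it appears to perform -- it assumes the interchange layout NVMeTLSkey-1:<two-hex-digit hash id>:<base64 of key bytes plus little-endian CRC32>: and mirrors the 'python -' heredoc pattern visible above:

    format_interchange_psk() {
        # Hypothetical reimplementation; layout assumed, not copied from this log.
        local key=$1 digest=${2:-0}
        python - <<EOF
    import base64, zlib
    key = b"$key"
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")
    print("NVMeTLSkey-1:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()))
    EOF
    }

    format_interchange_psk 00112233445566778899aabbccddeeff 0

With digest 0 (the '# digest=0' lines in the trace) the hash-id field renders as 00, i.e. a retained PSK with no hash applied. The chmod 0600 on the resulting /tmp/tmp.* files matters later in this test, as the keyring rejects key files with looser permissions.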
00:28:04.157 [2024-07-15 10:40:58.446942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2444228 ] 00:28:04.157 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.157 [2024-07-15 10:40:58.504850] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.157 [2024-07-15 10:40:58.622572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:04.416 10:40:58 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:04.416 [2024-07-15 10:40:58.873578] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.416 null0 00:28:04.416 [2024-07-15 10:40:58.905623] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:04.416 [2024-07-15 10:40:58.906100] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:04.416 [2024-07-15 10:40:58.913638] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.416 10:40:58 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:04.416 [2024-07-15 10:40:58.925655] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:04.416 request: 00:28:04.416 { 00:28:04.416 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:04.416 "secure_channel": false, 00:28:04.416 "listen_address": { 00:28:04.416 "trtype": "tcp", 00:28:04.416 "traddr": "127.0.0.1", 00:28:04.416 "trsvcid": "4420" 00:28:04.416 }, 00:28:04.416 "method": "nvmf_subsystem_add_listener", 00:28:04.416 "req_id": 1 00:28:04.416 } 00:28:04.416 Got JSON-RPC error response 00:28:04.416 response: 00:28:04.416 { 00:28:04.416 "code": -32602, 00:28:04.416 "message": "Invalid parameters" 00:28:04.416 } 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 
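The "Listener already exists" exchange above is a deliberate failure path: 127.0.0.1:4420 was registered a moment earlier, so the duplicate nvmf_subsystem_add_listener call must be rejected, and the es bookkeeping around it belongs to autotest's NOT wrapper. Its exact body is not shown in this log; the idiom amounts to inverting the wrapped command's exit status, roughly:

    NOT() {
        # Sketch of the negative-test helper; assumed shape, not copied from this log.
        if "$@"; then
            return 1    # wrapped command unexpectedly succeeded
        fi
        return 0        # wrapped command failed, which is what the caller wanted
    }

so NOT rpc_cmd nvmf_subsystem_add_listener ... passes precisely because the RPC returns the -32602 "Invalid parameters" error recorded above.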
00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:04.416 10:40:58 keyring_file -- keyring/file.sh@46 -- # bperfpid=2444238 00:28:04.416 10:40:58 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2444238 /var/tmp/bperf.sock 00:28:04.416 10:40:58 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2444238 ']' 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:04.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:04.416 10:40:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:04.416 [2024-07-15 10:40:58.973752] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:04.416 [2024-07-15 10:40:58.973838] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2444238 ] 00:28:04.416 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.416 [2024-07-15 10:40:59.041391] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.675 [2024-07-15 10:40:59.177336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.609 10:40:59 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:05.609 10:40:59 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:05.609 10:40:59 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4XKQzChSlB 00:28:05.609 10:40:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4XKQzChSlB 00:28:05.609 10:41:00 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.JstJQ68r7E 00:28:05.609 10:41:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.JstJQ68r7E 00:28:05.868 10:41:00 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:28:05.868 10:41:00 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:28:05.868 10:41:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:05.868 10:41:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:05.868 10:41:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:06.125 10:41:00 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.4XKQzChSlB == \/\t\m\p\/\t\m\p\.\4\X\K\Q\z\C\h\S\l\B ]] 00:28:06.125 10:41:00 keyring_file -- 
keyring/file.sh@52 -- # jq -r .path 00:28:06.125 10:41:00 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:28:06.125 10:41:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:06.125 10:41:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:06.125 10:41:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:06.382 10:41:00 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.JstJQ68r7E == \/\t\m\p\/\t\m\p\.\J\s\t\J\Q\6\8\r\7\E ]] 00:28:06.382 10:41:00 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:28:06.382 10:41:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:06.382 10:41:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:06.382 10:41:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:06.382 10:41:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:06.382 10:41:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:06.639 10:41:01 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:28:06.639 10:41:01 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:28:06.639 10:41:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:06.639 10:41:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:06.639 10:41:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:06.639 10:41:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:06.639 10:41:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:06.897 10:41:01 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:06.897 10:41:01 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:06.897 10:41:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:07.155 [2024-07-15 10:41:01.640330] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:07.155 nvme0n1 00:28:07.155 10:41:01 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:28:07.155 10:41:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:07.155 10:41:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:07.155 10:41:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:07.155 10:41:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:07.155 10:41:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:07.413 10:41:01 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:28:07.413 10:41:01 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:28:07.413 10:41:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:07.413 10:41:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:07.413 10:41:01 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:07.413 10:41:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:07.413 10:41:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:07.670 10:41:02 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:28:07.670 10:41:02 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:07.928 Running I/O for 1 seconds... 00:28:08.860 00:28:08.860 Latency(us) 00:28:08.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.860 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:08.860 nvme0n1 : 1.01 4676.08 18.27 0.00 0.00 27236.66 4126.34 41166.32 00:28:08.860 =================================================================================================================== 00:28:08.860 Total : 4676.08 18.27 0.00 0.00 27236.66 4126.34 41166.32 00:28:08.860 0 00:28:08.860 10:41:03 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:08.860 10:41:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:09.118 10:41:03 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:28:09.118 10:41:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:09.118 10:41:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:09.118 10:41:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:09.118 10:41:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:09.118 10:41:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:09.374 10:41:03 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:28:09.374 10:41:03 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:28:09.374 10:41:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:09.374 10:41:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:09.374 10:41:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:09.374 10:41:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:09.374 10:41:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:09.632 10:41:04 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:09.632 10:41:04 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:09.632 10:41:04 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:09.632 10:41:04 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:09.632 10:41:04 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:09.632 10:41:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:09.632 10:41:04 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:09.632 10:41:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:09.632 10:41:04 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:09.632 10:41:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:09.888 [2024-07-15 10:41:04.337110] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:09.888 [2024-07-15 10:41:04.337649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3d9a0 (107): Transport endpoint is not connected 00:28:09.888 [2024-07-15 10:41:04.338636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3d9a0 (9): Bad file descriptor 00:28:09.888 [2024-07-15 10:41:04.339634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:09.888 [2024-07-15 10:41:04.339668] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:09.888 [2024-07-15 10:41:04.339684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:09.888 request: 00:28:09.888 { 00:28:09.888 "name": "nvme0", 00:28:09.888 "trtype": "tcp", 00:28:09.888 "traddr": "127.0.0.1", 00:28:09.888 "adrfam": "ipv4", 00:28:09.888 "trsvcid": "4420", 00:28:09.888 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:09.888 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:09.888 "prchk_reftag": false, 00:28:09.888 "prchk_guard": false, 00:28:09.888 "hdgst": false, 00:28:09.888 "ddgst": false, 00:28:09.888 "psk": "key1", 00:28:09.888 "method": "bdev_nvme_attach_controller", 00:28:09.888 "req_id": 1 00:28:09.888 } 00:28:09.888 Got JSON-RPC error response 00:28:09.888 response: 00:28:09.888 { 00:28:09.888 "code": -5, 00:28:09.888 "message": "Input/output error" 00:28:09.888 } 00:28:09.888 10:41:04 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:09.888 10:41:04 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:09.888 10:41:04 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:09.888 10:41:04 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:09.888 10:41:04 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:28:09.888 10:41:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:09.888 10:41:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:09.888 10:41:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:09.888 10:41:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:09.888 10:41:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:10.144 10:41:04 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:28:10.144 10:41:04 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:28:10.144 10:41:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:10.144 10:41:04 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:10.144 10:41:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:10.144 10:41:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:10.144 10:41:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:10.400 10:41:04 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:10.400 10:41:04 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:28:10.400 10:41:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:10.657 10:41:05 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:28:10.657 10:41:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:10.913 10:41:05 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:28:10.913 10:41:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:10.913 10:41:05 keyring_file -- keyring/file.sh@77 -- # jq length 00:28:11.170 10:41:05 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:28:11.170 10:41:05 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.4XKQzChSlB 00:28:11.170 10:41:05 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.4XKQzChSlB 00:28:11.170 10:41:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:11.170 10:41:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.4XKQzChSlB 00:28:11.170 10:41:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:11.170 10:41:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:11.170 10:41:05 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:11.170 10:41:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:11.170 10:41:05 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4XKQzChSlB 00:28:11.170 10:41:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4XKQzChSlB 00:28:11.427 [2024-07-15 10:41:05.845540] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.4XKQzChSlB': 0100660 00:28:11.427 [2024-07-15 10:41:05.845580] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:11.427 request: 00:28:11.427 { 00:28:11.427 "name": "key0", 00:28:11.427 "path": "/tmp/tmp.4XKQzChSlB", 00:28:11.427 "method": "keyring_file_add_key", 00:28:11.428 "req_id": 1 00:28:11.428 } 00:28:11.428 Got JSON-RPC error response 00:28:11.428 response: 00:28:11.428 { 00:28:11.428 "code": -1, 00:28:11.428 "message": "Operation not permitted" 00:28:11.428 } 00:28:11.428 10:41:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:11.428 10:41:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:11.428 10:41:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:11.428 10:41:05 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:11.428 10:41:05 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.4XKQzChSlB 00:28:11.428 10:41:05 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4XKQzChSlB 00:28:11.428 10:41:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4XKQzChSlB 00:28:11.685 10:41:06 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.4XKQzChSlB 00:28:11.685 10:41:06 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:28:11.685 10:41:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:11.685 10:41:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:11.685 10:41:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:11.685 10:41:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:11.685 10:41:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:11.942 10:41:06 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:28:11.942 10:41:06 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:11.942 10:41:06 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:11.942 10:41:06 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:11.942 10:41:06 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:11.942 10:41:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:11.942 10:41:06 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:11.942 10:41:06 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:11.942 10:41:06 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:11.942 10:41:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:12.200 [2024-07-15 10:41:06.595724] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.4XKQzChSlB': No such file or directory 00:28:12.200 [2024-07-15 10:41:06.595762] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:12.200 [2024-07-15 10:41:06.595800] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:12.200 [2024-07-15 10:41:06.595813] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:12.200 [2024-07-15 10:41:06.595825] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:12.200 request: 00:28:12.200 { 00:28:12.200 "name": "nvme0", 00:28:12.200 "trtype": "tcp", 00:28:12.200 "traddr": "127.0.0.1", 00:28:12.200 "adrfam": "ipv4", 00:28:12.200 
"trsvcid": "4420", 00:28:12.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:12.200 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:12.200 "prchk_reftag": false, 00:28:12.200 "prchk_guard": false, 00:28:12.200 "hdgst": false, 00:28:12.200 "ddgst": false, 00:28:12.200 "psk": "key0", 00:28:12.200 "method": "bdev_nvme_attach_controller", 00:28:12.200 "req_id": 1 00:28:12.200 } 00:28:12.200 Got JSON-RPC error response 00:28:12.200 response: 00:28:12.200 { 00:28:12.200 "code": -19, 00:28:12.200 "message": "No such device" 00:28:12.200 } 00:28:12.200 10:41:06 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:12.200 10:41:06 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:12.200 10:41:06 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:12.200 10:41:06 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:12.200 10:41:06 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:28:12.200 10:41:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:12.458 10:41:06 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:12.458 10:41:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:12.458 10:41:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:12.458 10:41:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:12.458 10:41:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:12.458 10:41:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:12.458 10:41:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.el4EbKUv0Q 00:28:12.458 10:41:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:12.458 10:41:06 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:12.458 10:41:06 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:12.458 10:41:06 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:12.458 10:41:06 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:12.458 10:41:06 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:12.458 10:41:06 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:12.458 10:41:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.el4EbKUv0Q 00:28:12.458 10:41:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.el4EbKUv0Q 00:28:12.458 10:41:06 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.el4EbKUv0Q 00:28:12.458 10:41:06 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.el4EbKUv0Q 00:28:12.458 10:41:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.el4EbKUv0Q 00:28:12.715 10:41:07 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:12.715 10:41:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:12.973 nvme0n1 00:28:12.973 
10:41:07 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:28:12.973 10:41:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:12.973 10:41:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:12.973 10:41:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:12.973 10:41:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:12.973 10:41:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:13.231 10:41:07 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:28:13.231 10:41:07 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:28:13.231 10:41:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:13.488 10:41:07 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:28:13.488 10:41:07 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:28:13.488 10:41:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:13.488 10:41:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:13.488 10:41:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:13.745 10:41:08 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:28:13.745 10:41:08 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:28:13.745 10:41:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:13.745 10:41:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:13.745 10:41:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:13.745 10:41:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:13.745 10:41:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:14.002 10:41:08 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:28:14.002 10:41:08 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:14.002 10:41:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:14.259 10:41:08 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:28:14.259 10:41:08 keyring_file -- keyring/file.sh@104 -- # jq length 00:28:14.259 10:41:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:14.517 10:41:08 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:28:14.517 10:41:08 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.el4EbKUv0Q 00:28:14.517 10:41:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.el4EbKUv0Q 00:28:14.775 10:41:09 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.JstJQ68r7E 00:28:14.775 10:41:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.JstJQ68r7E 00:28:15.068 10:41:09 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:15.068 10:41:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:15.325 nvme0n1 00:28:15.326 10:41:09 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:28:15.326 10:41:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:15.583 10:41:10 keyring_file -- keyring/file.sh@112 -- # config='{ 00:28:15.583 "subsystems": [ 00:28:15.583 { 00:28:15.583 "subsystem": "keyring", 00:28:15.583 "config": [ 00:28:15.583 { 00:28:15.583 "method": "keyring_file_add_key", 00:28:15.583 "params": { 00:28:15.583 "name": "key0", 00:28:15.583 "path": "/tmp/tmp.el4EbKUv0Q" 00:28:15.583 } 00:28:15.583 }, 00:28:15.583 { 00:28:15.583 "method": "keyring_file_add_key", 00:28:15.583 "params": { 00:28:15.583 "name": "key1", 00:28:15.583 "path": "/tmp/tmp.JstJQ68r7E" 00:28:15.583 } 00:28:15.583 } 00:28:15.583 ] 00:28:15.583 }, 00:28:15.583 { 00:28:15.583 "subsystem": "iobuf", 00:28:15.583 "config": [ 00:28:15.583 { 00:28:15.583 "method": "iobuf_set_options", 00:28:15.583 "params": { 00:28:15.583 "small_pool_count": 8192, 00:28:15.583 "large_pool_count": 1024, 00:28:15.583 "small_bufsize": 8192, 00:28:15.583 "large_bufsize": 135168 00:28:15.583 } 00:28:15.583 } 00:28:15.583 ] 00:28:15.583 }, 00:28:15.583 { 00:28:15.583 "subsystem": "sock", 00:28:15.583 "config": [ 00:28:15.583 { 00:28:15.583 "method": "sock_set_default_impl", 00:28:15.583 "params": { 00:28:15.583 "impl_name": "posix" 00:28:15.583 } 00:28:15.583 }, 00:28:15.583 { 00:28:15.583 "method": "sock_impl_set_options", 00:28:15.583 "params": { 00:28:15.583 "impl_name": "ssl", 00:28:15.583 "recv_buf_size": 4096, 00:28:15.583 "send_buf_size": 4096, 00:28:15.583 "enable_recv_pipe": true, 00:28:15.583 "enable_quickack": false, 00:28:15.583 "enable_placement_id": 0, 00:28:15.583 "enable_zerocopy_send_server": true, 00:28:15.583 "enable_zerocopy_send_client": false, 00:28:15.583 "zerocopy_threshold": 0, 00:28:15.583 "tls_version": 0, 00:28:15.583 "enable_ktls": false 00:28:15.583 } 00:28:15.583 }, 00:28:15.583 { 00:28:15.583 "method": "sock_impl_set_options", 00:28:15.583 "params": { 00:28:15.583 "impl_name": "posix", 00:28:15.583 "recv_buf_size": 2097152, 00:28:15.583 "send_buf_size": 2097152, 00:28:15.583 "enable_recv_pipe": true, 00:28:15.583 "enable_quickack": false, 00:28:15.583 "enable_placement_id": 0, 00:28:15.583 "enable_zerocopy_send_server": true, 00:28:15.583 "enable_zerocopy_send_client": false, 00:28:15.583 "zerocopy_threshold": 0, 00:28:15.583 "tls_version": 0, 00:28:15.583 "enable_ktls": false 00:28:15.583 } 00:28:15.583 } 00:28:15.583 ] 00:28:15.583 }, 00:28:15.583 { 00:28:15.583 "subsystem": "vmd", 00:28:15.583 "config": [] 00:28:15.583 }, 00:28:15.583 { 00:28:15.583 "subsystem": "accel", 00:28:15.583 "config": [ 00:28:15.583 { 00:28:15.583 "method": "accel_set_options", 00:28:15.583 "params": { 00:28:15.583 "small_cache_size": 128, 00:28:15.583 "large_cache_size": 16, 00:28:15.583 "task_count": 2048, 00:28:15.583 "sequence_count": 2048, 00:28:15.583 "buf_count": 2048 00:28:15.583 } 00:28:15.583 } 00:28:15.583 ] 00:28:15.583 
}, 00:28:15.583 { 00:28:15.583 "subsystem": "bdev", 00:28:15.583 "config": [ 00:28:15.583 { 00:28:15.583 "method": "bdev_set_options", 00:28:15.583 "params": { 00:28:15.583 "bdev_io_pool_size": 65535, 00:28:15.583 "bdev_io_cache_size": 256, 00:28:15.583 "bdev_auto_examine": true, 00:28:15.583 "iobuf_small_cache_size": 128, 00:28:15.583 "iobuf_large_cache_size": 16 00:28:15.583 } 00:28:15.583 }, 00:28:15.583 { 00:28:15.583 "method": "bdev_raid_set_options", 00:28:15.583 "params": { 00:28:15.583 "process_window_size_kb": 1024 00:28:15.583 } 00:28:15.583 }, 00:28:15.583 { 00:28:15.583 "method": "bdev_iscsi_set_options", 00:28:15.583 "params": { 00:28:15.583 "timeout_sec": 30 00:28:15.583 } 00:28:15.583 }, 00:28:15.583 { 00:28:15.583 "method": "bdev_nvme_set_options", 00:28:15.583 "params": { 00:28:15.583 "action_on_timeout": "none", 00:28:15.583 "timeout_us": 0, 00:28:15.583 "timeout_admin_us": 0, 00:28:15.583 "keep_alive_timeout_ms": 10000, 00:28:15.583 "arbitration_burst": 0, 00:28:15.583 "low_priority_weight": 0, 00:28:15.583 "medium_priority_weight": 0, 00:28:15.583 "high_priority_weight": 0, 00:28:15.584 "nvme_adminq_poll_period_us": 10000, 00:28:15.584 "nvme_ioq_poll_period_us": 0, 00:28:15.584 "io_queue_requests": 512, 00:28:15.584 "delay_cmd_submit": true, 00:28:15.584 "transport_retry_count": 4, 00:28:15.584 "bdev_retry_count": 3, 00:28:15.584 "transport_ack_timeout": 0, 00:28:15.584 "ctrlr_loss_timeout_sec": 0, 00:28:15.584 "reconnect_delay_sec": 0, 00:28:15.584 "fast_io_fail_timeout_sec": 0, 00:28:15.584 "disable_auto_failback": false, 00:28:15.584 "generate_uuids": false, 00:28:15.584 "transport_tos": 0, 00:28:15.584 "nvme_error_stat": false, 00:28:15.584 "rdma_srq_size": 0, 00:28:15.584 "io_path_stat": false, 00:28:15.584 "allow_accel_sequence": false, 00:28:15.584 "rdma_max_cq_size": 0, 00:28:15.584 "rdma_cm_event_timeout_ms": 0, 00:28:15.584 "dhchap_digests": [ 00:28:15.584 "sha256", 00:28:15.584 "sha384", 00:28:15.584 "sha512" 00:28:15.584 ], 00:28:15.584 "dhchap_dhgroups": [ 00:28:15.584 "null", 00:28:15.584 "ffdhe2048", 00:28:15.584 "ffdhe3072", 00:28:15.584 "ffdhe4096", 00:28:15.584 "ffdhe6144", 00:28:15.584 "ffdhe8192" 00:28:15.584 ] 00:28:15.584 } 00:28:15.584 }, 00:28:15.584 { 00:28:15.584 "method": "bdev_nvme_attach_controller", 00:28:15.584 "params": { 00:28:15.584 "name": "nvme0", 00:28:15.584 "trtype": "TCP", 00:28:15.584 "adrfam": "IPv4", 00:28:15.584 "traddr": "127.0.0.1", 00:28:15.584 "trsvcid": "4420", 00:28:15.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:15.584 "prchk_reftag": false, 00:28:15.584 "prchk_guard": false, 00:28:15.584 "ctrlr_loss_timeout_sec": 0, 00:28:15.584 "reconnect_delay_sec": 0, 00:28:15.584 "fast_io_fail_timeout_sec": 0, 00:28:15.584 "psk": "key0", 00:28:15.584 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:15.584 "hdgst": false, 00:28:15.584 "ddgst": false 00:28:15.584 } 00:28:15.584 }, 00:28:15.584 { 00:28:15.584 "method": "bdev_nvme_set_hotplug", 00:28:15.584 "params": { 00:28:15.584 "period_us": 100000, 00:28:15.584 "enable": false 00:28:15.584 } 00:28:15.584 }, 00:28:15.584 { 00:28:15.584 "method": "bdev_wait_for_examine" 00:28:15.584 } 00:28:15.584 ] 00:28:15.584 }, 00:28:15.584 { 00:28:15.584 "subsystem": "nbd", 00:28:15.584 "config": [] 00:28:15.584 } 00:28:15.584 ] 00:28:15.584 }' 00:28:15.584 10:41:10 keyring_file -- keyring/file.sh@114 -- # killprocess 2444238 00:28:15.584 10:41:10 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2444238 ']' 00:28:15.584 10:41:10 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 2444238 00:28:15.584 10:41:10 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:15.584 10:41:10 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:15.584 10:41:10 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2444238 00:28:15.584 10:41:10 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:15.584 10:41:10 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:15.584 10:41:10 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2444238' 00:28:15.584 killing process with pid 2444238 00:28:15.584 10:41:10 keyring_file -- common/autotest_common.sh@967 -- # kill 2444238 00:28:15.584 Received shutdown signal, test time was about 1.000000 seconds 00:28:15.584 00:28:15.584 Latency(us) 00:28:15.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.584 =================================================================================================================== 00:28:15.584 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:15.584 10:41:10 keyring_file -- common/autotest_common.sh@972 -- # wait 2444238 00:28:15.841 10:41:10 keyring_file -- keyring/file.sh@117 -- # bperfpid=2445704 00:28:15.841 10:41:10 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2445704 /var/tmp/bperf.sock 00:28:15.841 10:41:10 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2445704 ']' 00:28:15.841 10:41:10 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:15.841 10:41:10 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:15.841 10:41:10 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:15.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
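
Nearly every check in this trace funnels through three helpers from test/keyring/common.sh whose expansions repeat at @8, @10 and @12. A sketch reconstructed from those expansions, not the verbatim source; $rootdir stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk prefix, and common.sh@6 (visible further down) sets bperfsock=/var/tmp/bperf.sock:

bperf_cmd() {
    "$rootdir/scripts/rpc.py" -s "$bperfsock" "$@"                   # common.sh@8: RPC to bdevperf
}
get_key() {
    bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"  # common.sh@10: one key object
}
get_refcnt() {
    get_key "$1" | jq -r .refcnt                                     # common.sh@12: its reference count
}

The refcount assertions read off this directly: a key sits at 1 while only the keyring references it and climbs to 2 once an attached TLS controller also holds it, which is the (( 1 == 1 )) / (( 2 == 2 )) pattern seen throughout.
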
00:28:15.841 10:41:10 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:15.841 10:41:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:15.841 10:41:10 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:15.841 10:41:10 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:28:15.841 "subsystems": [ 00:28:15.841 { 00:28:15.841 "subsystem": "keyring", 00:28:15.842 "config": [ 00:28:15.842 { 00:28:15.842 "method": "keyring_file_add_key", 00:28:15.842 "params": { 00:28:15.842 "name": "key0", 00:28:15.842 "path": "/tmp/tmp.el4EbKUv0Q" 00:28:15.842 } 00:28:15.842 }, 00:28:15.842 { 00:28:15.842 "method": "keyring_file_add_key", 00:28:15.842 "params": { 00:28:15.842 "name": "key1", 00:28:15.842 "path": "/tmp/tmp.JstJQ68r7E" 00:28:15.842 } 00:28:15.842 } 00:28:15.842 ] 00:28:15.842 }, 00:28:15.842 { 00:28:15.842 "subsystem": "iobuf", 00:28:15.842 "config": [ 00:28:15.842 { 00:28:15.842 "method": "iobuf_set_options", 00:28:15.842 "params": { 00:28:15.842 "small_pool_count": 8192, 00:28:15.842 "large_pool_count": 1024, 00:28:15.842 "small_bufsize": 8192, 00:28:15.842 "large_bufsize": 135168 00:28:15.842 } 00:28:15.842 } 00:28:15.842 ] 00:28:15.842 }, 00:28:15.842 { 00:28:15.842 "subsystem": "sock", 00:28:15.842 "config": [ 00:28:15.842 { 00:28:15.842 "method": "sock_set_default_impl", 00:28:15.842 "params": { 00:28:15.842 "impl_name": "posix" 00:28:15.842 } 00:28:15.842 }, 00:28:15.842 { 00:28:15.842 "method": "sock_impl_set_options", 00:28:15.842 "params": { 00:28:15.842 "impl_name": "ssl", 00:28:15.842 "recv_buf_size": 4096, 00:28:15.842 "send_buf_size": 4096, 00:28:15.842 "enable_recv_pipe": true, 00:28:15.842 "enable_quickack": false, 00:28:15.842 "enable_placement_id": 0, 00:28:15.842 "enable_zerocopy_send_server": true, 00:28:15.842 "enable_zerocopy_send_client": false, 00:28:15.842 "zerocopy_threshold": 0, 00:28:15.842 "tls_version": 0, 00:28:15.842 "enable_ktls": false 00:28:15.842 } 00:28:15.842 }, 00:28:15.842 { 00:28:15.842 "method": "sock_impl_set_options", 00:28:15.842 "params": { 00:28:15.842 "impl_name": "posix", 00:28:15.842 "recv_buf_size": 2097152, 00:28:15.842 "send_buf_size": 2097152, 00:28:15.842 "enable_recv_pipe": true, 00:28:15.842 "enable_quickack": false, 00:28:15.842 "enable_placement_id": 0, 00:28:15.842 "enable_zerocopy_send_server": true, 00:28:15.842 "enable_zerocopy_send_client": false, 00:28:15.842 "zerocopy_threshold": 0, 00:28:15.842 "tls_version": 0, 00:28:15.842 "enable_ktls": false 00:28:15.842 } 00:28:15.842 } 00:28:15.842 ] 00:28:15.842 }, 00:28:15.842 { 00:28:15.842 "subsystem": "vmd", 00:28:15.842 "config": [] 00:28:15.842 }, 00:28:15.842 { 00:28:15.842 "subsystem": "accel", 00:28:15.842 "config": [ 00:28:15.842 { 00:28:15.842 "method": "accel_set_options", 00:28:15.842 "params": { 00:28:15.842 "small_cache_size": 128, 00:28:15.842 "large_cache_size": 16, 00:28:15.842 "task_count": 2048, 00:28:15.842 "sequence_count": 2048, 00:28:15.842 "buf_count": 2048 00:28:15.842 } 00:28:15.842 } 00:28:15.842 ] 00:28:15.842 }, 00:28:15.842 { 00:28:15.842 "subsystem": "bdev", 00:28:15.842 "config": [ 00:28:15.842 { 00:28:15.842 "method": "bdev_set_options", 00:28:15.842 "params": { 00:28:15.842 "bdev_io_pool_size": 65535, 00:28:15.842 "bdev_io_cache_size": 256, 00:28:15.842 "bdev_auto_examine": true, 00:28:15.842 "iobuf_small_cache_size": 128, 00:28:15.842 "iobuf_large_cache_size": 16 
00:28:15.842 } 00:28:15.842 }, 00:28:15.842 { 00:28:15.842 "method": "bdev_raid_set_options", 00:28:15.842 "params": { 00:28:15.842 "process_window_size_kb": 1024 00:28:15.842 } 00:28:15.842 }, 00:28:15.842 { 00:28:15.842 "method": "bdev_iscsi_set_options", 00:28:15.842 "params": { 00:28:15.842 "timeout_sec": 30 00:28:15.842 } 00:28:15.842 }, 00:28:15.842 { 00:28:15.842 "method": "bdev_nvme_set_options", 00:28:15.842 "params": { 00:28:15.842 "action_on_timeout": "none", 00:28:15.842 "timeout_us": 0, 00:28:15.842 "timeout_admin_us": 0, 00:28:15.842 "keep_alive_timeout_ms": 10000, 00:28:15.842 "arbitration_burst": 0, 00:28:15.842 "low_priority_weight": 0, 00:28:15.842 "medium_priority_weight": 0, 00:28:15.842 "high_priority_weight": 0, 00:28:15.842 "nvme_adminq_poll_period_us": 10000, 00:28:15.842 "nvme_ioq_poll_period_us": 0, 00:28:15.842 "io_queue_requests": 512, 00:28:15.842 "delay_cmd_submit": true, 00:28:15.842 "transport_retry_count": 4, 00:28:15.842 "bdev_retry_count": 3, 00:28:15.842 "transport_ack_timeout": 0, 00:28:15.842 "ctrlr_loss_timeout_sec": 0, 00:28:15.842 "reconnect_delay_sec": 0, 00:28:15.842 "fast_io_fail_timeout_sec": 0, 00:28:15.842 "disable_auto_failback": false, 00:28:15.842 "generate_uuids": false, 00:28:15.842 "transport_tos": 0, 00:28:15.842 "nvme_error_stat": false, 00:28:15.842 "rdma_srq_size": 0, 00:28:15.842 "io_path_stat": false, 00:28:15.842 "allow_accel_sequence": false, 00:28:15.842 "rdma_max_cq_size": 0, 00:28:15.842 "rdma_cm_event_timeout_ms": 0, 00:28:15.842 "dhchap_digests": [ 00:28:15.842 "sha256", 00:28:15.842 "sha384", 00:28:15.842 "sha512" 00:28:15.842 ], 00:28:15.842 "dhchap_dhgroups": [ 00:28:15.842 "null", 00:28:15.842 "ffdhe2048", 00:28:15.842 "ffdhe3072", 00:28:15.842 "ffdhe4096", 00:28:15.842 "ffdhe6144", 00:28:15.842 "ffdhe8192" 00:28:15.842 ] 00:28:15.842 } 00:28:15.842 }, 00:28:15.842 { 00:28:15.842 "method": "bdev_nvme_attach_controller", 00:28:15.842 "params": { 00:28:15.842 "name": "nvme0", 00:28:15.842 "trtype": "TCP", 00:28:15.842 "adrfam": "IPv4", 00:28:15.842 "traddr": "127.0.0.1", 00:28:15.842 "trsvcid": "4420", 00:28:15.842 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:15.842 "prchk_reftag": false, 00:28:15.842 "prchk_guard": false, 00:28:15.842 "ctrlr_loss_timeout_sec": 0, 00:28:15.842 "reconnect_delay_sec": 0, 00:28:15.842 "fast_io_fail_timeout_sec": 0, 00:28:15.842 "psk": "key0", 00:28:15.842 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:15.842 "hdgst": false, 00:28:15.842 "ddgst": false 00:28:15.842 } 00:28:15.842 }, 00:28:15.842 { 00:28:15.842 "method": "bdev_nvme_set_hotplug", 00:28:15.842 "params": { 00:28:15.842 "period_us": 100000, 00:28:15.842 "enable": false 00:28:15.842 } 00:28:15.842 }, 00:28:15.842 { 00:28:15.842 "method": "bdev_wait_for_examine" 00:28:15.842 } 00:28:15.842 ] 00:28:15.842 }, 00:28:15.842 { 00:28:15.842 "subsystem": "nbd", 00:28:15.842 "config": [] 00:28:15.842 } 00:28:15.842 ] 00:28:15.842 }' 00:28:15.842 [2024-07-15 10:41:10.422055] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
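
The bdevperf restart being set up here follows a capture-and-replay pattern: keyring/file.sh@112 saved the old instance's configuration over RPC, and @115 feeds that JSON into a fresh bdevperf, where -c /dev/fd/63 indicates the config arrived via bash process substitution. A sketch of the sequence under that assumption ($rootdir again standing in for the workspace spdk path):

config=$(bperf_cmd save_config)                             # file.sh@112, against the old instance
"$rootdir/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 \
    -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config") &   # file.sh@115; <(...) becomes /dev/fd/63
bperfpid=$!                                                 # file.sh@117: 2445704 in this run
waitforlisten "$bperfpid" /var/tmp/bperf.sock               # file.sh@119

Because the saved config replays keyring_file_add_key for both key files before bdev_nvme_attach_controller, the new process comes up already attached with key0, which the (( 2 == 2 )) refcount check at file.sh@121 below confirms.
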
00:28:15.842 [2024-07-15 10:41:10.422141] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2445704 ] 00:28:15.842 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.842 [2024-07-15 10:41:10.479319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.100 [2024-07-15 10:41:10.594075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.356 [2024-07-15 10:41:10.783136] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:16.922 10:41:11 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:16.922 10:41:11 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:16.922 10:41:11 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:28:16.922 10:41:11 keyring_file -- keyring/file.sh@120 -- # jq length 00:28:16.922 10:41:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:17.179 10:41:11 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:28:17.179 10:41:11 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:28:17.179 10:41:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:17.179 10:41:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:17.179 10:41:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:17.179 10:41:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:17.179 10:41:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:17.437 10:41:11 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:17.437 10:41:11 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:28:17.437 10:41:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:17.437 10:41:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:17.437 10:41:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:17.437 10:41:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:17.437 10:41:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:17.693 10:41:12 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:28:17.693 10:41:12 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:28:17.693 10:41:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:17.693 10:41:12 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:28:17.951 10:41:12 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:28:17.951 10:41:12 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:17.951 10:41:12 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.el4EbKUv0Q /tmp/tmp.JstJQ68r7E 00:28:17.951 10:41:12 keyring_file -- keyring/file.sh@20 -- # killprocess 2445704 00:28:17.951 10:41:12 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2445704 ']' 00:28:17.951 10:41:12 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2445704 00:28:17.951 10:41:12 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:28:17.951 10:41:12 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:17.951 10:41:12 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2445704 00:28:17.951 10:41:12 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:17.951 10:41:12 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:17.951 10:41:12 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2445704' 00:28:17.951 killing process with pid 2445704 00:28:17.951 10:41:12 keyring_file -- common/autotest_common.sh@967 -- # kill 2445704 00:28:17.951 Received shutdown signal, test time was about 1.000000 seconds 00:28:17.951 00:28:17.951 Latency(us) 00:28:17.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.951 =================================================================================================================== 00:28:17.951 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:17.951 10:41:12 keyring_file -- common/autotest_common.sh@972 -- # wait 2445704 00:28:18.208 10:41:12 keyring_file -- keyring/file.sh@21 -- # killprocess 2444228 00:28:18.208 10:41:12 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2444228 ']' 00:28:18.208 10:41:12 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2444228 00:28:18.208 10:41:12 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:18.208 10:41:12 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:18.208 10:41:12 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2444228 00:28:18.208 10:41:12 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:18.208 10:41:12 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:18.208 10:41:12 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2444228' 00:28:18.208 killing process with pid 2444228 00:28:18.208 10:41:12 keyring_file -- common/autotest_common.sh@967 -- # kill 2444228 00:28:18.208 [2024-07-15 10:41:12.720578] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:18.208 10:41:12 keyring_file -- common/autotest_common.sh@972 -- # wait 2444228 00:28:18.779 00:28:18.779 real 0m14.942s 00:28:18.779 user 0m36.768s 00:28:18.779 sys 0m3.273s 00:28:18.779 10:41:13 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:18.779 10:41:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:18.779 ************************************ 00:28:18.779 END TEST keyring_file 00:28:18.779 ************************************ 00:28:18.779 10:41:13 -- common/autotest_common.sh@1142 -- # return 0 00:28:18.779 10:41:13 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:28:18.779 10:41:13 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:28:18.779 10:41:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:18.779 10:41:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:18.779 10:41:13 -- common/autotest_common.sh@10 -- # set +x 00:28:18.779 ************************************ 00:28:18.779 START TEST keyring_linux 00:28:18.779 ************************************ 00:28:18.779 10:41:13 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:28:18.779 * Looking for test storage... 00:28:18.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:18.779 10:41:13 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:18.779 10:41:13 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:18.779 10:41:13 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:18.779 10:41:13 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.779 10:41:13 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.779 10:41:13 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.779 10:41:13 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.779 10:41:13 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.779 10:41:13 keyring_linux -- paths/export.sh@5 -- # export PATH 00:28:18.779 10:41:13 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:18.779 10:41:13 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:18.779 10:41:13 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:18.779 10:41:13 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:18.779 10:41:13 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:28:18.779 10:41:13 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:28:18.779 10:41:13 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:28:18.779 10:41:13 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:28:18.779 10:41:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:18.779 10:41:13 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:28:18.779 10:41:13 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:18.779 10:41:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:18.779 10:41:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:28:18.779 10:41:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:18.779 10:41:13 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:28:18.779 10:41:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:28:18.779 /tmp/:spdk-test:key0 00:28:18.779 10:41:13 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:28:18.779 10:41:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:18.779 10:41:13 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:28:18.779 10:41:13 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:18.779 10:41:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:18.779 10:41:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:28:18.779 10:41:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:18.779 10:41:13 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:18.780 10:41:13 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:18.780 10:41:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:28:18.780 10:41:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:28:18.780 /tmp/:spdk-test:key1 00:28:18.780 10:41:13 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2446189 00:28:18.780 10:41:13 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:18.780 10:41:13 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2446189 00:28:18.780 10:41:13 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2446189 ']' 00:28:18.780 10:41:13 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.780 10:41:13 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:18.780 10:41:13 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.780 10:41:13 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:18.780 10:41:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:18.780 [2024-07-15 10:41:13.409390] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
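
The prep_key chain above ends in nvmf/common.sh@705's bare "python -", whose heredoc body xtrace cannot show. Judging from the literals the keys expand to further down (NVMeTLSkey-1:00:MDAx...JEiQ:), the step base64-encodes the key bytes with a little-endian CRC32 appended, behind a "<prefix>:<digest as two hex digits>:" header and a trailing colon. A sketch of that computation; passing arguments via sys.argv is an assumption, as the real helper interpolates shell variables into the heredoc:

format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'EOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")   # integrity tag of the interchange format
b64 = base64.b64encode(key + crc).decode()
print("{}:{:02x}:{}:".format(prefix, digest, b64), end="")
EOF
}

Called as format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 (the @715 expansion above), a helper of this shape yields a 48-character base64 body with no hash applied, matching the strings the keyctl commands below load into the session keyring.
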
00:28:18.780 [2024-07-15 10:41:13.409474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2446189 ] 00:28:19.036 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.036 [2024-07-15 10:41:13.466747] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.036 [2024-07-15 10:41:13.575887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.967 10:41:14 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:19.967 10:41:14 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:28:19.967 10:41:14 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:28:19.967 10:41:14 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.967 10:41:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:19.967 [2024-07-15 10:41:14.342685] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.967 null0 00:28:19.967 [2024-07-15 10:41:14.374726] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:19.967 [2024-07-15 10:41:14.375241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:19.967 10:41:14 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.967 10:41:14 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:28:19.967 268619100 00:28:19.967 10:41:14 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:28:19.967 895039639 00:28:19.967 10:41:14 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2446326 00:28:19.967 10:41:14 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2446326 /var/tmp/bperf.sock 00:28:19.967 10:41:14 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:28:19.967 10:41:14 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2446326 ']' 00:28:19.967 10:41:14 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:19.967 10:41:14 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:19.967 10:41:14 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:19.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:19.967 10:41:14 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:19.967 10:41:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:19.967 [2024-07-15 10:41:14.443226] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
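
With the linux backend the key material lives in the kernel session keyring rather than in files, and the serials printed above (268619100 and 895039639 in this run; they differ every run) are what later lookups must resolve. The round trip, condensed from the linux.sh@16/@26/@27/@34 expansions that follow in this trace:

sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
keyctl search @s user :spdk-test:key0   # linux.sh@16: resolve the name back to the serial
keyctl print "$sn"                      # linux.sh@27: payload must equal the NVMeTLSkey-1 string
keyctl unlink "$sn"                     # linux.sh@34: cleanup; the log reports "1 links removed"

Unlike the file backend, bdevperf is started with --wait-for-rpc here so that keyring_linux_set_options --enable can be issued before framework_start_init; only then does the :spdk-test:key0 name given to --psk resolve against the kernel keyring.
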
00:28:19.967 [2024-07-15 10:41:14.443323] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2446326 ]
00:28:19.967 EAL: No free 2048 kB hugepages reported on node 1
00:28:19.967 [2024-07-15 10:41:14.506210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:20.224 [2024-07-15 10:41:14.624640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:20.224 10:41:14 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:20.224 10:41:14 keyring_linux -- common/autotest_common.sh@862 -- # return 0
00:28:20.224 10:41:14 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable
00:28:20.224 10:41:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
00:28:20.481 10:41:14 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init
00:28:20.481 10:41:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:28:20.739 10:41:15 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:28:20.739 10:41:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:28:20.996 [2024-07-15 10:41:15.463570] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:28:20.996 nvme0n1
00:28:20.996 10:41:15 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0
00:28:20.996 10:41:15 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0
00:28:20.996 10:41:15 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:28:20.996 10:41:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:28:20.996 10:41:15 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:28:20.996 10:41:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:21.253 10:41:15 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count ))
00:28:21.253 10:41:15 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:28:21.253 10:41:15 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0
00:28:21.253 10:41:15 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn
00:28:21.253 10:41:15 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:28:21.253 10:41:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:21.253 10:41:15 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")'
00:28:21.510 10:41:16 keyring_linux -- keyring/linux.sh@25 -- # sn=268619100
00:28:21.510 10:41:16 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0
00:28:21.510 10:41:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:28:21.510 10:41:16 keyring_linux -- keyring/linux.sh@26 -- # [[ 268619100 == \2\6\8\6\1\9\1\0\0 ]]
00:28:21.510 10:41:16 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 268619100
00:28:21.510 10:41:16 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:28:21.510 10:41:16 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:21.510 Running I/O for 1 seconds...
00:28:22.883
00:28:22.883 Latency(us)
00:28:22.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:22.883 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:22.883 nvme0n1 : 1.02 4766.44 18.62 0.00 0.00 26609.15 5267.15 31651.46
00:28:22.883 ===================================================================================================================
00:28:22.883 Total : 4766.44 18.62 0.00 0.00 26609.15 5267.15 31651.46
00:28:22.883 0
00:28:22.883 10:41:17 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:28:22.883 10:41:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:28:22.883 10:41:17 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:28:22.883 10:41:17 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:28:22.883 10:41:17 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:28:22.883 10:41:17 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:28:22.883 10:41:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:22.883 10:41:17 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:28:23.142 10:41:17 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:28:23.142 10:41:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:28:23.142 10:41:17 keyring_linux -- keyring/linux.sh@23 -- # return
00:28:23.142 10:41:17 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:28:23.142 10:41:17 keyring_linux -- common/autotest_common.sh@648 -- # local es=0
00:28:23.142 10:41:17 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:28:23.142 10:41:17 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd
00:28:23.142 10:41:17 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:28:23.142 10:41:17 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd
00:28:23.142 10:41:17 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:28:23.142 10:41:17 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:28:23.142 10:41:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:28:23.401 [2024-07-15 10:41:17.927418] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:28:23.401 [2024-07-15 10:41:17.927809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26cb3f0 (107): Transport endpoint is not connected
00:28:23.401 [2024-07-15 10:41:17.928803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26cb3f0 (9): Bad file descriptor
00:28:23.401 [2024-07-15 10:41:17.929801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:28:23.401 [2024-07-15 10:41:17.929833] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:28:23.401 [2024-07-15 10:41:17.929850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:28:23.401 request:
00:28:23.401 {
00:28:23.401 "name": "nvme0",
00:28:23.401 "trtype": "tcp",
00:28:23.401 "traddr": "127.0.0.1",
00:28:23.401 "adrfam": "ipv4",
00:28:23.401 "trsvcid": "4420",
00:28:23.401 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:28:23.401 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:28:23.401 "prchk_reftag": false,
00:28:23.401 "prchk_guard": false,
00:28:23.401 "hdgst": false,
00:28:23.401 "ddgst": false,
00:28:23.401 "psk": ":spdk-test:key1",
00:28:23.401 "method": "bdev_nvme_attach_controller",
00:28:23.401 "req_id": 1
00:28:23.401 }
00:28:23.401 Got JSON-RPC error response
00:28:23.401 response:
00:28:23.401 {
00:28:23.401 "code": -5,
00:28:23.401 "message": "Input/output error"
00:28:23.401 }
00:28:23.401 10:41:17 keyring_linux -- common/autotest_common.sh@651 -- # es=1
00:28:23.401 10:41:17 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:28:23.401 10:41:17 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:28:23.401 10:41:17 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:28:23.401 10:41:17 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:28:23.401 10:41:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:28:23.401 10:41:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:28:23.401 10:41:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:28:23.401 10:41:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:28:23.401 10:41:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:28:23.401 10:41:17 keyring_linux -- keyring/linux.sh@33 -- # sn=268619100
00:28:23.401 10:41:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 268619100
00:28:23.401 1 links removed
00:28:23.401 10:41:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:28:23.401 10:41:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:28:23.401 10:41:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:28:23.401 10:41:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:28:23.401 10:41:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:28:23.401 10:41:17 keyring_linux -- keyring/linux.sh@33 -- # sn=895039639
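
Note on the JSON-RPC exchange above: :spdk-test:key1 deliberately does not match the PSK the target was configured with, so the TCP connect fails and rpc.py surfaces the error response (code -5, Input/output error) that the test's NOT wrapper requires. Reduced to its essentials, the negative check has the following shape; the rpc.py flags are taken verbatim from the trace, while the surrounding if/exit is a simplified sketch of the harness's expect-failure logic, not its actual implementation:

  # Sketch: expect bdev_nvme_attach_controller to fail when the PSK is wrong.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  if "$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk :spdk-test:key1; then
    echo "FAIL: attach with mismatched PSK unexpectedly succeeded" >&2
    exit 1
  fi
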
00:28:23.401 10:41:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 895039639
00:28:23.401 1 links removed
00:28:23.401 10:41:17 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2446326
00:28:23.401 10:41:17 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2446326 ']'
00:28:23.401 10:41:17 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2446326
00:28:23.401 10:41:17 keyring_linux -- common/autotest_common.sh@953 -- # uname
00:28:23.401 10:41:17 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:23.401 10:41:17 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2446326
00:28:23.401 10:41:17 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:23.401 10:41:17 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:23.401 10:41:17 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2446326'
00:28:23.401 killing process with pid 2446326
00:28:23.401 10:41:17 keyring_linux -- common/autotest_common.sh@967 -- # kill 2446326
00:28:23.401 Received shutdown signal, test time was about 1.000000 seconds
00:28:23.401
00:28:23.401 Latency(us)
00:28:23.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:23.401 ===================================================================================================================
00:28:23.401 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:23.401 10:41:17 keyring_linux -- common/autotest_common.sh@972 -- # wait 2446326
00:28:23.660 10:41:18 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2446189
00:28:23.660 10:41:18 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2446189 ']'
00:28:23.660 10:41:18 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2446189
00:28:23.660 10:41:18 keyring_linux -- common/autotest_common.sh@953 -- # uname
00:28:23.660 10:41:18 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:23.660 10:41:18 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2446189
00:28:23.660 10:41:18 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:28:23.660 10:41:18 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:28:23.660 10:41:18 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2446189'
00:28:23.660 killing process with pid 2446189
00:28:23.660 10:41:18 keyring_linux -- common/autotest_common.sh@967 -- # kill 2446189
00:28:23.660 10:41:18 keyring_linux -- common/autotest_common.sh@972 -- # wait 2446189
00:28:24.226
00:28:24.226 real 0m5.518s
00:28:24.226 user 0m10.021s
00:28:24.226 sys 0m1.562s
00:28:24.226 10:41:18 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:24.226 10:41:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:28:24.226 ************************************
00:28:24.226 END TEST keyring_linux
00:28:24.226 ************************************
00:28:24.226 10:41:18 -- common/autotest_common.sh@1142 -- # return 0
00:28:24.226 10:41:18 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:28:24.226 10:41:18 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:28:24.226 10:41:18 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:28:24.226 10:41:18 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']'
00:28:24.226 10:41:18 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']'
00:28:24.226 10:41:18 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:28:24.226 10:41:18 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:28:24.226 10:41:18 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:28:24.226 10:41:18 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:28:24.226 10:41:18 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:28:24.226 10:41:18 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:28:24.226 10:41:18 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:28:24.226 10:41:18 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:28:24.226 10:41:18 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:28:24.226 10:41:18 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:28:24.226 10:41:18 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:28:24.226 10:41:18 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:28:24.226 10:41:18 -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:24.226 10:41:18 -- common/autotest_common.sh@10 -- # set +x
00:28:24.226 10:41:18 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:28:24.226 10:41:18 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:28:24.226 10:41:18 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:28:24.226 10:41:18 -- common/autotest_common.sh@10 -- # set +x
00:28:26.126 INFO: APP EXITING
00:28:26.126 INFO: killing all VMs
00:28:26.126 INFO: killing vhost app
00:28:26.126 INFO: EXIT DONE
00:28:27.061 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:28:27.061 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:28:27.061 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:28:27.061 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:28:27.061 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:28:27.061 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:28:27.061 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:28:27.061 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:28:27.061 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:28:27.061 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:28:27.061 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:28:27.061 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:28:27.061 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:28:27.061 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:28:27.061 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:28:27.061 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:28:27.061 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:28:28.438 Cleaning
00:28:28.438 Removing: /var/run/dpdk/spdk0/config
00:28:28.438 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:28:28.438 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:28:28.438 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:28:28.438 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:28:28.438 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:28:28.438 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:28:28.438 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:28:28.438 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:28:28.438 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:28:28.438 Removing: /var/run/dpdk/spdk0/hugepage_info
00:28:28.438 Removing: /var/run/dpdk/spdk1/config
00:28:28.438 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:28:28.438 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:28:28.438 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:28:28.438 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:28:28.438 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:28:28.438 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:28:28.438 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:28:28.438 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:28:28.438 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:28:28.438 Removing: /var/run/dpdk/spdk1/hugepage_info
00:28:28.438 Removing: /var/run/dpdk/spdk1/mp_socket
00:28:28.438 Removing: /var/run/dpdk/spdk2/config
00:28:28.438 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:28:28.438 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:28:28.438 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:28:28.438 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:28:28.438 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:28:28.438 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:28:28.438 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:28:28.438 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:28:28.438 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:28:28.438 Removing: /var/run/dpdk/spdk2/hugepage_info
00:28:28.438 Removing: /var/run/dpdk/spdk3/config
00:28:28.438 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:28:28.438 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:28:28.438 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:28:28.438 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:28:28.438 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:28:28.438 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:28:28.438 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:28:28.438 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:28:28.438 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:28:28.438 Removing: /var/run/dpdk/spdk3/hugepage_info
00:28:28.438 Removing: /var/run/dpdk/spdk4/config
00:28:28.438 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:28:28.438 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:28:28.438 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:28:28.438 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:28:28.438 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:28:28.438 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:28:28.438 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:28:28.438 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:28:28.438 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:28:28.438 Removing: /var/run/dpdk/spdk4/hugepage_info
00:28:28.438 Removing: /dev/shm/bdev_svc_trace.1
00:28:28.438 Removing: /dev/shm/nvmf_trace.0
00:28:28.438 Removing: /dev/shm/spdk_tgt_trace.pid2184186
00:28:28.438 Removing: /var/run/dpdk/spdk0
00:28:28.438 Removing: /var/run/dpdk/spdk1
00:28:28.438 Removing: /var/run/dpdk/spdk2
00:28:28.438 Removing: /var/run/dpdk/spdk3
00:28:28.438 Removing: /var/run/dpdk/spdk4
00:28:28.438 Removing: /var/run/dpdk/spdk_pid2182523
00:28:28.438 Removing: /var/run/dpdk/spdk_pid2183252
00:28:28.438 Removing: /var/run/dpdk/spdk_pid2184186
00:28:28.438 Removing: /var/run/dpdk/spdk_pid2184504
00:28:28.438 Removing: /var/run/dpdk/spdk_pid2185191
00:28:28.438 Removing: /var/run/dpdk/spdk_pid2185332
00:28:28.438 Removing: /var/run/dpdk/spdk_pid2186050
00:28:28.438 Removing: /var/run/dpdk/spdk_pid2186190
00:28:28.438 Removing: /var/run/dpdk/spdk_pid2186432
00:28:28.697 Removing: /var/run/dpdk/spdk_pid2188668
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2188979
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2189170
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2189371
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2189561
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2189722
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2189995
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2190174
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2190496
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2192848
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2193047
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2193405
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2193422
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2193785
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2193857
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2194664
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2194781
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2195089
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2195179
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2195391
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2195508
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2195904
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2196063
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2196256
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2196424
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2196568
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2196635
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2196909
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2197070
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2197229
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2197497
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2197664
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2197819
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2198091
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2198259
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2198411
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2198685
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2198845
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2199024
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2199271
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2199435
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2199662
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2199868
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2200027
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2200309
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2200460
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2200625
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2200812
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2201018
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2203193
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2229268
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2232236
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2239356
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2242670
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2245014
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2245422
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2249396
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2253236
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2253243
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2253893
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2254437
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2255096
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2255500
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2255511
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2255770
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2255784
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2255818
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2256454
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2257106
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2257691
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2258177
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2258181
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2258366
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2259468
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2260193
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2266295
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2266567
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2269163
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2272917
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2275111
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2281632
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2286823
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2288027
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2288686
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2299777
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2301992
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2327537
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2330332
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2331510
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2332820
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2332959
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2333095
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2333122
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2333556
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2334868
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2335720
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2336032
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2337779
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2338335
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2338832
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2341304
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2347319
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2350102
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2354488
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2355546
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2356640
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2359320
00:28:28.698 Removing: /var/run/dpdk/spdk_pid2361573
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2365892
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2365900
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2368672
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2368803
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2369064
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2369331
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2369342
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2372093
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2372433
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2375102
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2377078
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2380507
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2383968
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2390923
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2395389
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2395400
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2407604
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2408136
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2408546
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2409078
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2409652
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2410066
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2410470
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2410882
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2413378
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2413641
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2417430
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2417614
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2419223
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2424881
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2424889
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2427777
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2429062
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2430459
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2431317
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2432729
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2433597
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2438837
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2439167
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2439554
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2441108
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2441508
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2441787
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2444228
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2444238
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2445704
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2446189
00:28:28.957 Removing: /var/run/dpdk/spdk_pid2446326
00:28:28.957 Clean
00:28:28.957 10:41:23 -- common/autotest_common.sh@1451 -- # return 0
00:28:28.957 10:41:23 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:28:28.957 10:41:23 -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:28.957 10:41:23 -- common/autotest_common.sh@10 -- # set +x
00:28:28.957 10:41:23 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:28:28.957 10:41:23 -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:28.957 10:41:23 -- common/autotest_common.sh@10 -- # set +x
00:28:28.957 10:41:23 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:28:28.957 10:41:23 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:28:28.957 10:41:23 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:28:28.957 10:41:23 -- spdk/autotest.sh@391 -- # hash lcov
00:28:28.957 10:41:23 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:28:28.957 10:41:23 -- spdk/autotest.sh@393 -- # hostname
00:28:28.957 10:41:23 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:29:29.216 geninfo: WARNING: invalid characters removed from testname!
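
The coverage steps that follow condense to a standard lcov post-processing pipeline: merge the base and test captures into cov_total.info, then repeatedly strip path patterns that should not count toward coverage. A condensed sketch of the same sequence (the long --rc option blocks that every invocation below repeats are elided here for brevity):

  # Sketch of the lcov steps below: merge base+test captures, then filter.
  out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"  # drop records matching $pat
  done
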
00:29:07.968 10:41:57 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:29:07.968 10:42:01 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:29:10.493 10:42:04 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:29:13.019 10:42:07 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:29:16.299 10:42:10 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:29:19.586 10:42:13 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:29:22.153 10:42:16 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:29:22.153 10:42:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:22.153 10:42:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:29:22.153 10:42:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:22.153 10:42:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:22.153 10:42:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:22.153 10:42:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:22.153 10:42:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:22.153 10:42:16 -- paths/export.sh@5 -- $ export PATH
00:29:22.153 10:42:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:22.153 10:42:16 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:29:22.153 10:42:16 -- common/autobuild_common.sh@444 -- $ date +%s
00:29:22.153 10:42:16 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721032936.XXXXXX
00:29:22.153 10:42:16 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721032936.IN14VY
00:29:22.153 10:42:16 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:29:22.153 10:42:16 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:29:22.153 10:42:16 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:29:22.153 10:42:16 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:29:22.153 10:42:16 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:29:22.153 10:42:16 -- common/autobuild_common.sh@460 -- $ get_config_params
00:29:22.153 10:42:16 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:29:22.153 10:42:16 -- common/autotest_common.sh@10 -- $ set +x
00:29:22.153 10:42:16 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:29:22.153 10:42:16 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:29:22.153 10:42:16 -- pm/common@17 -- $ local monitor
00:29:22.153 10:42:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:22.153 10:42:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:22.153 10:42:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:22.153 10:42:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:22.153 10:42:16 -- pm/common@21 -- $ date +%s
00:29:22.153 10:42:16 -- pm/common@21 -- $ date +%s
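
The start_monitor_resources sequence that begins above and continues below follows a conventional pidfile pattern: each collector is launched in the background with its PID recorded under the power output directory, and the EXIT trap near the end of the log signals only the pidfiles that exist. A generic sketch of that start/stop shape; "my-collector" and its flag are placeholders, not the actual pm scripts' internals:

  # Sketch: pidfile-based start/stop for a background monitor (names illustrative).
  power_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
  my-collector -d "$power_dir" &               # hypothetical long-running collector
  echo $! > "$power_dir/my-collector.pid"      # record its PID for later teardown
  # ... later, from the EXIT trap:
  if [[ -e "$power_dir/my-collector.pid" ]]; then
    kill -TERM "$(cat "$power_dir/my-collector.pid")"
  fi
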
00:29:22.153 10:42:16 -- pm/common@25 -- $ sleep 1
00:29:22.153 10:42:16 -- pm/common@21 -- $ date +%s
00:29:22.153 10:42:16 -- pm/common@21 -- $ date +%s
00:29:22.153 10:42:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721032936
00:29:22.153 10:42:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721032936
00:29:22.153 10:42:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721032936
00:29:22.153 10:42:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721032936
00:29:22.153 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721032936_collect-vmstat.pm.log
00:29:22.153 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721032936_collect-cpu-load.pm.log
00:29:22.153 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721032936_collect-cpu-temp.pm.log
00:29:22.153 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721032936_collect-bmc-pm.bmc.pm.log
00:29:23.095 10:42:17 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:29:23.095 10:42:17 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:29:23.095 10:42:17 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:29:23.095 10:42:17 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:29:23.095 10:42:17 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:29:23.095 10:42:17 -- spdk/autopackage.sh@19 -- $ timing_finish
00:29:23.095 10:42:17 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:29:23.095 10:42:17 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:29:23.095 10:42:17 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:29:23.095 10:42:17 -- spdk/autopackage.sh@20 -- $ exit 0
00:29:23.095 10:42:17 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:29:23.095 10:42:17 -- pm/common@29 -- $ signal_monitor_resources TERM
00:29:23.095 10:42:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:29:23.095 10:42:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:23.095 10:42:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:29:23.095 10:42:17 -- pm/common@44 -- $ pid=2456662
00:29:23.095 10:42:17 -- pm/common@50 -- $ kill -TERM 2456662
00:29:23.095 10:42:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:23.095 10:42:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:29:23.095 10:42:17 -- pm/common@44 -- $ pid=2456664
00:29:23.095 10:42:17 -- pm/common@50 -- $ kill -TERM 2456664
00:29:23.095 10:42:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:23.095 10:42:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:29:23.095 10:42:17 -- pm/common@44 -- $ pid=2456666
00:29:23.095 10:42:17 -- pm/common@50 -- $ kill -TERM 2456666
00:29:23.095 10:42:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:23.095 10:42:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:29:23.095 10:42:17 -- pm/common@44 -- $ pid=2456695
00:29:23.095 10:42:17 -- pm/common@50 -- $ sudo -E kill -TERM 2456695
00:29:23.095 + [[ -n 2098215 ]]
00:29:23.095 + sudo kill 2098215
00:29:23.108 [Pipeline] }
00:29:23.132 [Pipeline] // stage
00:29:23.139 [Pipeline] }
00:29:23.159 [Pipeline] // timeout
00:29:23.166 [Pipeline] }
00:29:23.184 [Pipeline] // catchError
00:29:23.192 [Pipeline] }
00:29:23.214 [Pipeline] // wrap
00:29:23.222 [Pipeline] }
00:29:23.243 [Pipeline] // catchError
00:29:23.253 [Pipeline] stage
00:29:23.256 [Pipeline] { (Epilogue)
00:29:23.275 [Pipeline] catchError
00:29:23.277 [Pipeline] {
00:29:23.297 [Pipeline] echo
00:29:23.299 Cleanup processes
00:29:23.307 [Pipeline] sh
00:29:23.596 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:29:23.596 2456811 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:29:23.596 2456928 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:29:23.613 [Pipeline] sh
00:29:23.900 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:29:23.900 ++ grep -v 'sudo pgrep'
00:29:23.900 ++ awk '{print $1}'
00:29:23.900 + sudo kill -9 2456811
00:29:23.913 [Pipeline] sh
00:29:24.200 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:32.332 [Pipeline] sh
00:29:32.618 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:29:32.618 Artifacts sizes are good
00:29:32.633 [Pipeline] archiveArtifacts
00:29:32.640 Archiving artifacts
00:29:32.868 [Pipeline] sh
00:29:33.151 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:29:33.167 [Pipeline] cleanWs
00:29:33.179 [WS-CLEANUP] Deleting project workspace...
00:29:33.179 [WS-CLEANUP] Deferred wipeout is used...
00:29:33.187 [WS-CLEANUP] done
00:29:33.189 [Pipeline] }
00:29:33.214 [Pipeline] // catchError
00:29:33.229 [Pipeline] sh
00:29:33.511 + logger -p user.info -t JENKINS-CI
00:29:33.521 [Pipeline] }
00:29:33.538 [Pipeline] // stage
00:29:33.544 [Pipeline] }
00:29:33.561 [Pipeline] // node
00:29:33.567 [Pipeline] End of Pipeline
00:29:33.597 Finished: SUCCESS
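
One last idiom worth noting from the Epilogue above: the "Cleanup processes" step lists leftover workspace processes with pgrep -af, filters out the pgrep invocation itself, and force-kills the rest. As a standalone sketch under the same workspace path; the guard against an empty PID list is an addition, not part of the traced script:

  # Sketch: find and force-kill processes still touching the job workspace.
  ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
  [ -n "$pids" ] && sudo kill -9 $pids   # unquoted on purpose: one PID per word
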